Test Report: Hyper-V_Windows 19044

eee102c36261874e9b159e2a7f565a8081da63a0:2024-06-12:34865

Failed tests (14/200)

TestAddons/parallel/Registry (72.15s)
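
For local triage, this subtest can be re-run on its own against a freshly built Windows binary. The sketch below assumes minikube's integration-test harness conventions (the ./test/integration package and its -minikube-start-args flag); neither is taken from this report:

    # Hypothetical reproduction from a minikube checkout (PowerShell);
    # build out/minikube-windows-amd64.exe first, then run only this subtest.
    go test ./test/integration -v -timeout 30m `
        -run "TestAddons/parallel/Registry" `
        -minikube-start-args="--driver=hyperv"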

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 13.997ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-f9trb" [e6c13dcd-e52f-4d4b-ab41-b525ce55df5f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0155687s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vqd8p" [04cfc241-3db6-42fc-965e-b4d28c1dd4e7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0211972s
addons_test.go:342: (dbg) Run:  kubectl --context addons-605800 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-605800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-605800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.4758642s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 ip: (2.6676665s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0612 13:05:07.745709    3540 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-605800 ip"
2024/06/12 13:05:10 [DEBUG] GET http://172.23.204.232:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable registry --alsologtostderr -v=1: (15.8474671s)
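
The assertion that actually fails is at addons_test.go:366 above: the test requires "minikube ip" to leave stderr empty, but each invocation on this host prints a Docker CLI warning because the current context is "default" while its metadata file under C:\Users\jenkins.minikube1\.docker\contexts is missing. A host-side cleanup could look like the following sketch (standard docker context subcommands; an assumption, not a fix verified against this job):

    # Hypothetical cleanup on the Windows host (PowerShell).
    docker context ls              # list configured contexts and the current one
    docker context use default     # repoint the CLI at the built-in default context
    # If the on-disk context store itself is stale, removing it is a blunt reset;
    # the CLI recreates what it needs on next use.
    Remove-Item -Recurse -Force "$env:USERPROFILE\.docker\contexts" -ErrorAction SilentlyContinue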
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-605800 -n addons-605800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-605800 -n addons-605800: (13.1279434s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 logs -n 25: (9.936814s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-328400 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:56 PDT |                     |
	|         | -p download-only-328400              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:56 PDT | 12 Jun 24 12:56 PDT |
	| delete  | -p download-only-328400              | download-only-328400 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:56 PDT | 12 Jun 24 12:57 PDT |
	| start   | -o=json --download-only              | download-only-880500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT |                     |
	|         | -p download-only-880500              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT | 12 Jun 24 12:57 PDT |
	| delete  | -p download-only-880500              | download-only-880500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT | 12 Jun 24 12:57 PDT |
	| delete  | -p download-only-328400              | download-only-328400 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT | 12 Jun 24 12:57 PDT |
	| delete  | -p download-only-880500              | download-only-880500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT | 12 Jun 24 12:57 PDT |
	| start   | --download-only -p                   | binary-mirror-097300 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT |                     |
	|         | binary-mirror-097300                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:58105               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-097300              | binary-mirror-097300 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT | 12 Jun 24 12:57 PDT |
	| addons  | disable dashboard -p                 | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT |                     |
	|         | addons-605800                        |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT |                     |
	|         | addons-605800                        |                      |                   |         |                     |                     |
	| start   | -p addons-605800 --wait=true         | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT | 12 Jun 24 13:04 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | enable headlamp                      | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:04 PDT | 12 Jun 24 13:05 PDT |
	|         | -p addons-605800                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:04 PDT | 12 Jun 24 13:05 PDT |
	|         | addons-605800                        |                      |                   |         |                     |                     |
	| addons  | addons-605800 addons disable         | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:05 PDT | 12 Jun 24 13:05 PDT |
	|         | helm-tiller --alsologtostderr        |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| ip      | addons-605800 ip                     | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:05 PDT | 12 Jun 24 13:05 PDT |
	| addons  | addons-605800 addons disable         | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:05 PDT | 12 Jun 24 13:05 PDT |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| addons  | addons-605800 addons                 | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:05 PDT |                     |
	|         | disable metrics-server               |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| ssh     | addons-605800 ssh curl -s            | addons-605800        | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:05 PDT |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                      |                   |         |                     |                     |
	|         | nginx.example.com'                   |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 12:57:27
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 12:57:27.295556    2768 out.go:291] Setting OutFile to fd 884 ...
	I0612 12:57:27.296701    2768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 12:57:27.296701    2768 out.go:304] Setting ErrFile to fd 888...
	I0612 12:57:27.296701    2768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 12:57:27.321395    2768 out.go:298] Setting JSON to false
	I0612 12:57:27.325038    2768 start.go:129] hostinfo: {"hostname":"minikube1","uptime":20600,"bootTime":1718201647,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 12:57:27.326038    2768 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 12:57:27.330866    2768 out.go:177] * [addons-605800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 12:57:27.334902    2768 notify.go:220] Checking for updates...
	I0612 12:57:27.337467    2768 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 12:57:27.339810    2768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 12:57:27.342670    2768 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 12:57:27.345037    2768 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 12:57:27.347904    2768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 12:57:27.351625    2768 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 12:57:32.735534    2768 out.go:177] * Using the hyperv driver based on user configuration
	I0612 12:57:32.735534    2768 start.go:297] selected driver: hyperv
	I0612 12:57:32.735534    2768 start.go:901] validating driver "hyperv" against <nil>
	I0612 12:57:32.735534    2768 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 12:57:32.791743    2768 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 12:57:32.792501    2768 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 12:57:32.792501    2768 cni.go:84] Creating CNI manager for ""
	I0612 12:57:32.793080    2768 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0612 12:57:32.793080    2768 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 12:57:32.793283    2768 start.go:340] cluster config:
	{Name:addons-605800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-605800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 12:57:32.793283    2768 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 12:57:32.797481    2768 out.go:177] * Starting "addons-605800" primary control-plane node in "addons-605800" cluster
	I0612 12:57:32.800465    2768 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 12:57:32.800709    2768 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0612 12:57:32.800751    2768 cache.go:56] Caching tarball of preloaded images
	I0612 12:57:32.800858    2768 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 12:57:32.800858    2768 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 12:57:32.802031    2768 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\config.json ...
	I0612 12:57:32.802436    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\config.json: {Name:mkf964893e036904d7ca90efe3d72e7a3e56adcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 12:57:32.803824    2768 start.go:360] acquireMachinesLock for addons-605800: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 12:57:32.803824    2768 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-605800"
	I0612 12:57:32.803824    2768 start.go:93] Provisioning new machine with config: &{Name:addons-605800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-605800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 12:57:32.804390    2768 start.go:125] createHost starting for "" (driver="hyperv")
	I0612 12:57:32.808345    2768 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0612 12:57:32.808345    2768 start.go:159] libmachine.API.Create for "addons-605800" (driver="hyperv")
	I0612 12:57:32.808345    2768 client.go:168] LocalClient.Create starting
	I0612 12:57:32.809744    2768 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 12:57:32.935110    2768 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 12:57:33.248528    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 12:57:35.271776    2768 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 12:57:35.272589    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:35.272589    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 12:57:36.933507    2768 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 12:57:36.934087    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:36.934171    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 12:57:38.395949    2768 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 12:57:38.396384    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:38.396462    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 12:57:42.069807    2768 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 12:57:42.069807    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:42.072710    2768 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 12:57:42.564276    2768 main.go:141] libmachine: Creating SSH key...
	I0612 12:57:42.722124    2768 main.go:141] libmachine: Creating VM...
	I0612 12:57:42.722124    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 12:57:45.526148    2768 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 12:57:45.526148    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:45.526148    2768 main.go:141] libmachine: Using switch "Default Switch"
	I0612 12:57:45.526148    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 12:57:47.231145    2768 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 12:57:47.231981    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:47.231981    2768 main.go:141] libmachine: Creating VHD
	I0612 12:57:47.232056    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 12:57:51.002744    2768 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 38CC249A-AE33-41D4-B836-58D67931AD2F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 12:57:51.003556    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:51.003556    2768 main.go:141] libmachine: Writing magic tar header
	I0612 12:57:51.003708    2768 main.go:141] libmachine: Writing SSH key tar header
	I0612 12:57:51.012129    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 12:57:54.214162    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:57:54.214162    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:54.214162    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\disk.vhd' -SizeBytes 20000MB
	I0612 12:57:56.745794    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:57:56.745794    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:57:56.746621    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-605800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0612 12:58:00.401906    2768 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-605800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 12:58:00.402759    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:00.402759    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-605800 -DynamicMemoryEnabled $false
	I0612 12:58:02.597233    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:02.597233    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:02.597233    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-605800 -Count 2
	I0612 12:58:04.709568    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:04.709568    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:04.709775    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-605800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\boot2docker.iso'
	I0612 12:58:07.255384    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:07.255384    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:07.256244    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-605800 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\disk.vhd'
	I0612 12:58:09.992683    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:09.992908    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:09.992967    2768 main.go:141] libmachine: Starting VM...
	I0612 12:58:09.992967    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-605800
	I0612 12:58:13.202517    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:13.202960    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:13.202960    2768 main.go:141] libmachine: Waiting for host to start...
	I0612 12:58:13.203031    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:15.549598    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:15.549738    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:15.549738    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:58:18.121988    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:18.121988    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:19.137177    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:21.399837    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:21.400060    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:21.400149    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:58:23.946823    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:23.946823    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:24.950743    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:27.160648    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:27.161591    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:27.161712    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:58:29.754197    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:29.754197    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:30.759910    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:32.990802    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:32.990802    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:32.990802    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:58:35.450719    2768 main.go:141] libmachine: [stdout =====>] : 
	I0612 12:58:35.451645    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:36.463051    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:38.668981    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:38.669626    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:38.669626    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:58:41.275704    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:58:41.275704    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:41.276543    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:43.393334    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:43.394013    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:43.394328    2768 machine.go:94] provisionDockerMachine start ...
	I0612 12:58:43.394506    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:45.517615    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:45.517615    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:45.517918    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:58:47.983284    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:58:47.983710    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:47.989717    2768 main.go:141] libmachine: Using SSH client type: native
	I0612 12:58:48.000955    2768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.232 22 <nil> <nil>}
	I0612 12:58:48.000955    2768 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 12:58:48.138693    2768 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 12:58:48.138794    2768 buildroot.go:166] provisioning hostname "addons-605800"
	I0612 12:58:48.138880    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:50.249198    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:50.249198    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:50.249198    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:58:52.757130    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:58:52.757130    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:52.762738    2768 main.go:141] libmachine: Using SSH client type: native
	I0612 12:58:52.763776    2768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.232 22 <nil> <nil>}
	I0612 12:58:52.763776    2768 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-605800 && echo "addons-605800" | sudo tee /etc/hostname
	I0612 12:58:52.925533    2768 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-605800
	
	I0612 12:58:52.925533    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:55.052382    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:55.052382    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:55.053354    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:58:57.666462    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:58:57.666729    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:57.673003    2768 main.go:141] libmachine: Using SSH client type: native
	I0612 12:58:57.673003    2768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.232 22 <nil> <nil>}
	I0612 12:58:57.673588    2768 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-605800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-605800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-605800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 12:58:57.834025    2768 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 12:58:57.834025    2768 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 12:58:57.834025    2768 buildroot.go:174] setting up certificates
	I0612 12:58:57.834025    2768 provision.go:84] configureAuth start
	I0612 12:58:57.834025    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:58:59.960901    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:58:59.961072    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:58:59.961072    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:02.417959    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:02.417959    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:02.418877    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:04.507516    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:04.507516    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:04.508200    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:06.966306    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:06.966306    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:06.967365    2768 provision.go:143] copyHostCerts
	I0612 12:59:06.967970    2768 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 12:59:06.969762    2768 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 12:59:06.970809    2768 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 12:59:06.971540    2768 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-605800 san=[127.0.0.1 172.23.204.232 addons-605800 localhost minikube]
	I0612 12:59:07.202472    2768 provision.go:177] copyRemoteCerts
	I0612 12:59:07.215246    2768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 12:59:07.215319    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:09.312513    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:09.313358    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:09.313358    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:11.855070    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:11.855070    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:11.855797    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 12:59:11.969298    2768 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7539643s)
	I0612 12:59:11.970115    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 12:59:12.016368    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 12:59:12.065316    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 12:59:12.116242    2768 provision.go:87] duration metric: took 14.2821745s to configureAuth
	I0612 12:59:12.116242    2768 buildroot.go:189] setting minikube options for container-runtime
	I0612 12:59:12.117272    2768 config.go:182] Loaded profile config "addons-605800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 12:59:12.117272    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:14.323449    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:14.323449    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:14.324029    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:16.894506    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:16.895421    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:16.901163    2768 main.go:141] libmachine: Using SSH client type: native
	I0612 12:59:16.901576    2768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.232 22 <nil> <nil>}
	I0612 12:59:16.901576    2768 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 12:59:17.034135    2768 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 12:59:17.034200    2768 buildroot.go:70] root file system type: tmpfs
	I0612 12:59:17.034466    2768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 12:59:17.034547    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:19.188953    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:19.188953    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:19.189725    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:21.696357    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:21.696545    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:21.703019    2768 main.go:141] libmachine: Using SSH client type: native
	I0612 12:59:21.703583    2768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.232 22 <nil> <nil>}
	I0612 12:59:21.703690    2768 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 12:59:21.862106    2768 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 12:59:21.862106    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:24.002188    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:24.002188    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:24.002188    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:26.511838    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:26.511838    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:26.517433    2768 main.go:141] libmachine: Using SSH client type: native
	I0612 12:59:26.517691    2768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.232 22 <nil> <nil>}
	I0612 12:59:26.517691    2768 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 12:59:28.688089    2768 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0612 12:59:28.688089    2768 machine.go:97] duration metric: took 45.2936248s to provisionDockerMachine
	I0612 12:59:28.688089    2768 client.go:171] duration metric: took 1m55.8793962s to LocalClient.Create
	I0612 12:59:28.688089    2768 start.go:167] duration metric: took 1m55.8793962s to libmachine.API.Create "addons-605800"
	I0612 12:59:28.688089    2768 start.go:293] postStartSetup for "addons-605800" (driver="hyperv")
	I0612 12:59:28.688089    2768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 12:59:28.700107    2768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 12:59:28.700107    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:30.896877    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:30.896877    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:30.896877    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:33.529387    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:33.529540    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:33.529540    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 12:59:33.643228    2768 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9430562s)
	I0612 12:59:33.657036    2768 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 12:59:33.663434    2768 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 12:59:33.663434    2768 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 12:59:33.663434    2768 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 12:59:33.663434    2768 start.go:296] duration metric: took 4.9753302s for postStartSetup
	I0612 12:59:33.666741    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:35.842742    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:35.843918    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:35.844030    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:38.453719    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:38.453719    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:38.454616    2768 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\config.json ...
	I0612 12:59:38.458002    2768 start.go:128] duration metric: took 2m5.6532353s to createHost
	I0612 12:59:38.458160    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:40.651447    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:40.651447    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:40.651447    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:43.220399    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:43.221469    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:43.226964    2768 main.go:141] libmachine: Using SSH client type: native
	I0612 12:59:43.226964    2768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.232 22 <nil> <nil>}
	I0612 12:59:43.226964    2768 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 12:59:43.373019    2768 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718222383.371941318
	
	I0612 12:59:43.373102    2768 fix.go:216] guest clock: 1718222383.371941318
	I0612 12:59:43.373102    2768 fix.go:229] Guest: 2024-06-12 12:59:43.371941318 -0700 PDT Remote: 2024-06-12 12:59:38.4580974 -0700 PDT m=+131.252409601 (delta=4.913843918s)
	I0612 12:59:43.373166    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:45.507786    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:45.507786    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:45.508640    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:48.055335    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:48.055496    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:48.060997    2768 main.go:141] libmachine: Using SSH client type: native
	I0612 12:59:48.061757    2768 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.232 22 <nil> <nil>}
	I0612 12:59:48.061757    2768 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718222383
	I0612 12:59:48.214897    2768 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 19:59:43 UTC 2024
	
	I0612 12:59:48.214897    2768 fix.go:236] clock set: Wed Jun 12 19:59:43 UTC 2024
	 (err=<nil>)
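fix.go above is the guest-clock check: read the VM's clock with `date +%s.%N`, compare it against the host's wall clock, and resync the guest with `sudo date -s @<seconds>` when the skew is outside tolerance, which is what the command at 12:59:48 does. The 4.913843918s delta it logs reproduces directly from the two timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        pdt := time.FixedZone("PDT", -7*60*60)
        // guest reading, from `date +%s.%N` over SSH (log line above)
        guest := time.Unix(1718222383, 371941318)
        // host wall clock captured when createHost finished
        remote := time.Date(2024, 6, 12, 12, 59, 38, 458097400, pdt)
        fmt.Println(guest.Sub(remote)) // prints 4.913843918s, matching the log
    }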
	I0612 12:59:48.214897    2768 start.go:83] releasing machines lock for "addons-605800", held for 2m15.4106663s
	I0612 12:59:48.214897    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:50.380127    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:50.380127    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:50.380127    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:52.865143    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:52.865143    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:52.871088    2768 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 12:59:52.871256    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:52.880990    2768 ssh_runner.go:195] Run: cat /version.json
	I0612 12:59:52.880990    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 12:59:55.121198    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:55.121198    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:55.122131    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:55.129991    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 12:59:55.129991    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:55.130244    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 12:59:57.817439    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:57.817476    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:57.817644    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 12:59:57.834862    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 12:59:57.834862    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 12:59:57.835587    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 12:59:58.020140    2768 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1490358s)
	I0612 12:59:58.020204    2768 ssh_runner.go:235] Completed: cat /version.json: (5.1391984s)
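Note that the curl probe of registry.k8s.io and the `cat /version.json` above were launched concurrently (their Hyper-V state polls interleave) and completed together about 5.1s later. A sketch of that fan-out with a WaitGroup, using local exec as a stand-in for the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "sync"
    )

    func main() {
        cmds := [][]string{
            {"curl", "-sS", "-m", "2", "https://registry.k8s.io/"},
            {"cat", "/version.json"},
        }
        var wg sync.WaitGroup
        for _, c := range cmds {
            wg.Add(1)
            go func(argv []string) {
                defer wg.Done()
                out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
                fmt.Printf("%v -> err=%v, %d bytes\n", argv, err, len(out))
            }(c)
        }
        wg.Wait()
    }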
	I0612 12:59:58.032779    2768 ssh_runner.go:195] Run: systemctl --version
	I0612 12:59:58.059722    2768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 12:59:58.069383    2768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 12:59:58.080981    2768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 12:59:58.113592    2768 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 12:59:58.113592    2768 start.go:494] detecting cgroup driver to use...
	I0612 12:59:58.113592    2768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 12:59:58.161795    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 12:59:58.197190    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 12:59:58.218843    2768 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 12:59:58.232576    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 12:59:58.264744    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 12:59:58.295474    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 12:59:58.325930    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 12:59:58.358703    2768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 12:59:58.389197    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 12:59:58.421087    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 12:59:58.454647    2768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 12:59:58.488408    2768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 12:59:58.516911    2768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 12:59:58.547490    2768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 12:59:58.757340    2768 ssh_runner.go:195] Run: sudo systemctl restart containerd
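The run of sed one-liners above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.9, force SystemdCgroup = false to match the cgroupfs driver chosen here, migrate io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d before reloading containerd. The same anchored, indentation-preserving substitution in Go, shown for the sandbox_image line (a stand-in for the shell, not minikube's code):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // one hypothetical pre-edit line of config.toml
        conf := `  sandbox_image = "registry.k8s.io/pause:3.8"`
        re := regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`)
        fmt.Println(re.ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`))
    }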
	I0612 12:59:58.788054    2768 start.go:494] detecting cgroup driver to use...
	I0612 12:59:58.801012    2768 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 12:59:58.841353    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 12:59:58.874204    2768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 12:59:58.921395    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 12:59:58.957328    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 12:59:58.992555    2768 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 12:59:59.061353    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 12:59:59.089792    2768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 12:59:59.136915    2768 ssh_runner.go:195] Run: which cri-dockerd
	I0612 12:59:59.158847    2768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 12:59:59.175800    2768 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 12:59:59.221888    2768 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 12:59:59.424414    2768 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 12:59:59.607757    2768 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 12:59:59.607757    2768 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 12:59:59.654535    2768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 12:59:59.833476    2768 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:00:02.366516    2768 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5329958s)
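The 130-byte /etc/docker/daemon.json written just above is what actually pins docker to the "cgroupfs" driver before the restart. Its contents are not logged; a plausible minimal version, where exec-opts is the load-bearing key and everything else is assumption:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        daemon := map[string]interface{}{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, _ := json.MarshalIndent(daemon, "", "  ")
        fmt.Println(string(b)) // the shape scp'd to /etc/docker/daemon.json
    }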
	I0612 13:00:02.392000    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 13:00:02.429622    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:00:02.462706    2768 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 13:00:02.658680    2768 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 13:00:02.864455    2768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:00:03.074421    2768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 13:00:03.123228    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:00:03.167396    2768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:00:03.364838    2768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 13:00:03.468315    2768 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 13:00:03.479294    2768 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 13:00:03.489294    2768 start.go:562] Will wait 60s for crictl version
	I0612 13:00:03.503292    2768 ssh_runner.go:195] Run: which crictl
	I0612 13:00:03.520870    2768 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 13:00:03.574118    2768 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 13:00:03.583143    2768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:00:03.621208    2768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:00:03.656754    2768 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 13:00:03.656993    2768 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 13:00:03.661751    2768 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 13:00:03.661751    2768 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 13:00:03.661751    2768 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 13:00:03.661751    2768 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 13:00:03.664325    2768 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 13:00:03.664856    2768 ip.go:210] interface addr: 172.23.192.1/20
	I0612 13:00:03.676239    2768 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 13:00:03.682261    2768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
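The grep/echo pipeline above is an idempotent hosts-file update: drop any stale host.minikube.internal line, append the current gateway IP, stage the result in a PID-suffixed temp file, and install it with `sudo cp`. The same steps as a Go sketch, to be run on the guest (filtering logic mirrors the shell):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "172.23.192.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        keep := []string{}
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                keep = append(keep, line)
            }
        }
        keep = append(keep, entry)
        tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
        if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        // the log then runs `sudo cp /tmp/h.$$ /etc/hosts` to install it
    }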
	I0612 13:00:03.702596    2768 kubeadm.go:877] updating cluster {Name:addons-605800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-605800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.204.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 13:00:03.702791    2768 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:00:03.711397    2768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 13:00:03.730815    2768 docker.go:685] Got preloaded images: 
	I0612 13:00:03.730815    2768 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0612 13:00:03.744711    2768 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0612 13:00:03.773192    2768 ssh_runner.go:195] Run: which lz4
	I0612 13:00:03.792983    2768 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 13:00:03.798891    2768 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 13:00:03.799092    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0612 13:00:05.497931    2768 docker.go:649] duration metric: took 1.7174273s to copy over tarball
	I0612 13:00:05.510926    2768 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 13:00:11.244751    2768 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.7337066s)
	I0612 13:00:11.244751    2768 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 13:00:11.309774    2768 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0612 13:00:11.330480    2768 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0612 13:00:11.379326    2768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:00:11.590392    2768 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:00:17.281345    2768 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6909357s)
	I0612 13:00:17.291058    2768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 13:00:17.319254    2768 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0612 13:00:17.319254    2768 cache_images.go:84] Images are preloaded, skipping loading
	I0612 13:00:17.319254    2768 kubeadm.go:928] updating node { 172.23.204.232 8443 v1.30.1 docker true true} ...
	I0612 13:00:17.319254    2768 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-605800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.204.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-605800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 13:00:17.329276    2768 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0612 13:00:17.371197    2768 cni.go:84] Creating CNI manager for ""
	I0612 13:00:17.371197    2768 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0612 13:00:17.371197    2768 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 13:00:17.371197    2768 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.204.232 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-605800 NodeName:addons-605800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.204.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.204.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 13:00:17.371722    2768 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.204.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-605800"
	  kubeletExtraArgs:
	    node-ip: 172.23.204.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.204.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 13:00:17.384240    2768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 13:00:17.402163    2768 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 13:00:17.417619    2768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 13:00:17.439086    2768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0612 13:00:17.469669    2768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 13:00:17.499066    2768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0612 13:00:17.544174    2768 ssh_runner.go:195] Run: grep 172.23.204.232	control-plane.minikube.internal$ /etc/hosts
	I0612 13:00:17.549651    2768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.204.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 13:00:17.594401    2768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:00:17.792660    2768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:00:17.822544    2768 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800 for IP: 172.23.204.232
	I0612 13:00:17.822602    2768 certs.go:194] generating shared ca certs ...
	I0612 13:00:17.822684    2768 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:17.823075    2768 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 13:00:17.972492    2768 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0612 13:00:17.972492    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:17.974281    2768 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0612 13:00:17.974281    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:17.975166    2768 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 13:00:18.185093    2768 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0612 13:00:18.186089    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:18.186862    2768 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0612 13:00:18.186862    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:18.188393    2768 certs.go:256] generating profile certs ...
	I0612 13:00:18.189062    2768 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.key
	I0612 13:00:18.189062    2768 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt with IP's: []
	I0612 13:00:18.681862    2768 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt ...
	I0612 13:00:18.681862    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: {Name:mk55487ccceede7d120b2c517ed1a4496c55cee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:18.683908    2768 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.key ...
	I0612 13:00:18.683908    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.key: {Name:mk3b6f71b29aa8189dddb07b02ee8f3bf132ab73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:18.685263    2768 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.key.0c40713b
	I0612 13:00:18.685617    2768 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.crt.0c40713b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.204.232]
	I0612 13:00:18.849996    2768 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.crt.0c40713b ...
	I0612 13:00:18.849996    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.crt.0c40713b: {Name:mk76cc0ded25d23a7791ff1899c81e5b33d70b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:18.852191    2768 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.key.0c40713b ...
	I0612 13:00:18.852191    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.key.0c40713b: {Name:mkff1433731cbdd0b145fcccd91421618c329652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:18.852892    2768 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.crt.0c40713b -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.crt
	I0612 13:00:18.864951    2768 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.key.0c40713b -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.key
	I0612 13:00:18.866153    2768 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\proxy-client.key
	I0612 13:00:18.866153    2768 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\proxy-client.crt with IP's: []
	I0612 13:00:19.352496    2768 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\proxy-client.crt ...
	I0612 13:00:19.352496    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\proxy-client.crt: {Name:mkb5a6150f3b10a08e3ac341906019e8e66f91e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:19.354716    2768 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\proxy-client.key ...
	I0612 13:00:19.354716    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\proxy-client.key: {Name:mk1d78fd62b09d2c1cec2abd0e67d6058e19e1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
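certs.go above builds two self-signed CAs (minikubeCA and proxyClientCA) and then profile certs signed by them, with the apiserver cert's SANs covering the service VIP, localhost, and the VM address. A compact sketch of that CA-then-leaf flow with crypto/x509, using the IPs from the log; PEM encoding, the lock files, and the aggregator pair are omitted:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // self-signed CA, analogous to generating "minikubeCA" above
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }
        // apiserver serving cert signed by the CA, SAN IPs as logged above
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("172.23.204.232"),
            },
        }
        if _, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey); err != nil {
            panic(err)
        }
    }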
	I0612 13:00:19.366057    2768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 13:00:19.366582    2768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 13:00:19.367125    2768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 13:00:19.367329    2768 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 13:00:19.368667    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 13:00:19.416444    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 13:00:19.467144    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 13:00:19.513722    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 13:00:19.560335    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0612 13:00:19.602543    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 13:00:19.654299    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 13:00:19.698928    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0612 13:00:19.743602    2768 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 13:00:19.787483    2768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 13:00:19.830086    2768 ssh_runner.go:195] Run: openssl version
	I0612 13:00:19.850464    2768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 13:00:19.884270    2768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:00:19.893419    2768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:00:19.906699    2768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:00:19.925802    2768 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 13:00:19.959697    2768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:00:19.967162    2768 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 13:00:19.967162    2768 kubeadm.go:391] StartCluster: {Name:addons-605800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-605800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.204.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:00:19.977714    2768 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 13:00:20.012624    2768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 13:00:20.041734    2768 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 13:00:20.076305    2768 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 13:00:20.093829    2768 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 13:00:20.093829    2768 kubeadm.go:156] found existing configuration files:
	
	I0612 13:00:20.106062    2768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 13:00:20.122665    2768 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 13:00:20.136498    2768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 13:00:20.166573    2768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 13:00:20.184748    2768 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 13:00:20.196549    2768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 13:00:20.229455    2768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 13:00:20.247721    2768 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 13:00:20.258540    2768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 13:00:20.287333    2768 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 13:00:20.303763    2768 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 13:00:20.316443    2768 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 13:00:20.333843    2768 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 13:00:20.556022    2768 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 13:00:34.184309    2768 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 13:00:34.184309    2768 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 13:00:34.184309    2768 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 13:00:34.185053    2768 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 13:00:34.185276    2768 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 13:00:34.185276    2768 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 13:00:34.189444    2768 out.go:204]   - Generating certificates and keys ...
	I0612 13:00:34.189927    2768 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 13:00:34.190131    2768 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 13:00:34.190391    2768 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 13:00:34.190391    2768 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 13:00:34.190391    2768 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 13:00:34.190391    2768 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 13:00:34.190987    2768 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 13:00:34.191353    2768 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-605800 localhost] and IPs [172.23.204.232 127.0.0.1 ::1]
	I0612 13:00:34.191553    2768 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 13:00:34.191641    2768 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-605800 localhost] and IPs [172.23.204.232 127.0.0.1 ::1]
	I0612 13:00:34.191641    2768 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 13:00:34.191641    2768 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 13:00:34.191641    2768 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 13:00:34.192283    2768 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 13:00:34.192410    2768 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 13:00:34.192564    2768 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 13:00:34.192754    2768 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 13:00:34.192967    2768 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 13:00:34.193102    2768 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 13:00:34.193362    2768 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 13:00:34.193593    2768 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 13:00:34.196650    2768 out.go:204]   - Booting up control plane ...
	I0612 13:00:34.196650    2768 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 13:00:34.196650    2768 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 13:00:34.196650    2768 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 13:00:34.196650    2768 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 13:00:34.197845    2768 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 13:00:34.197845    2768 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 13:00:34.198141    2768 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 13:00:34.198396    2768 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 13:00:34.198431    2768 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.713527ms
	I0612 13:00:34.198431    2768 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 13:00:34.198431    2768 kubeadm.go:309] [api-check] The API server is healthy after 7.003412733s
	I0612 13:00:34.199057    2768 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 13:00:34.199057    2768 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 13:00:34.199057    2768 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 13:00:34.199751    2768 kubeadm.go:309] [mark-control-plane] Marking the node addons-605800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 13:00:34.199751    2768 kubeadm.go:309] [bootstrap-token] Using token: 2kxos2.v32p2kvznqg431qz
	I0612 13:00:34.202200    2768 out.go:204]   - Configuring RBAC rules ...
	I0612 13:00:34.202699    2768 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 13:00:34.202853    2768 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 13:00:34.203103    2768 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 13:00:34.203436    2768 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 13:00:34.203745    2768 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 13:00:34.204054    2768 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 13:00:34.204245    2768 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 13:00:34.204245    2768 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 13:00:34.204526    2768 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 13:00:34.204570    2768 kubeadm.go:309] 
	I0612 13:00:34.204717    2768 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 13:00:34.204717    2768 kubeadm.go:309] 
	I0612 13:00:34.204910    2768 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 13:00:34.204910    2768 kubeadm.go:309] 
	I0612 13:00:34.205133    2768 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 13:00:34.205311    2768 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 13:00:34.205311    2768 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 13:00:34.205503    2768 kubeadm.go:309] 
	I0612 13:00:34.205665    2768 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 13:00:34.205665    2768 kubeadm.go:309] 
	I0612 13:00:34.205828    2768 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 13:00:34.205828    2768 kubeadm.go:309] 
	I0612 13:00:34.205990    2768 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 13:00:34.206099    2768 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 13:00:34.206331    2768 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 13:00:34.206331    2768 kubeadm.go:309] 
	I0612 13:00:34.206416    2768 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 13:00:34.206416    2768 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 13:00:34.206416    2768 kubeadm.go:309] 
	I0612 13:00:34.206416    2768 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2kxos2.v32p2kvznqg431qz \
	I0612 13:00:34.207313    2768 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 \
	I0612 13:00:34.207348    2768 kubeadm.go:309] 	--control-plane 
	I0612 13:00:34.207348    2768 kubeadm.go:309] 
	I0612 13:00:34.207541    2768 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 13:00:34.207541    2768 kubeadm.go:309] 
	I0612 13:00:34.207809    2768 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2kxos2.v32p2kvznqg431qz \
	I0612 13:00:34.208018    2768 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 
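The --discovery-token-ca-cert-hash that kubeadm prints above is a public-key pin: sha256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. It can be recomputed from this cluster's CA (the path comes from CertDir in the kubeadm options logged earlier):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command above
    }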
	I0612 13:00:34.208018    2768 cni.go:84] Creating CNI manager for ""
	I0612 13:00:34.208018    2768 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0612 13:00:34.212350    2768 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 13:00:34.226350    2768 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 13:00:34.246613    2768 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
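The 496-byte 1-k8s.conflist written above is the bridge configuration this "recommending bridge" path installs. Its exact contents are not in the log; a plausible shape, assuming the standard bridge + portmap plugins and grounded only in the 10.244.0.0/16 pod CIDR chosen earlier:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]interface{}{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]interface{}{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "isGateway":   true,
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        b, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(b))
    }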
	I0612 13:00:34.287631    2768 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 13:00:34.303608    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:34.304616    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-605800 minikube.k8s.io/updated_at=2024_06_12T13_00_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=addons-605800 minikube.k8s.io/primary=true
	I0612 13:00:34.311796    2768 ops.go:34] apiserver oom_adj: -16
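The "apiserver oom_adj: -16" above comes from reading /proc/<pid>/oom_adj for the kube-apiserver, a check that the control plane stays unattractive to the OOM killer. The probe itself is one file read, with the pid lookup (done via pgrep in the log) left as an assumption:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        pid := 1234 // stand-in; the log resolves it with `pgrep kube-apiserver`
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.TrimSpace(string(data))) // "-16" for the apiserver above
    }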
	I0612 13:00:34.435645    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:34.945234    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:35.445053    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:35.944905    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:36.446946    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:36.951125    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:37.436855    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:37.940284    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:38.441621    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:38.944401    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:39.445523    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:39.943868    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:40.435510    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:40.940106    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:41.445507    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:41.949208    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:42.437275    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:42.938711    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:43.442665    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:43.944042    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:44.436460    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:44.937683    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:45.442480    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:45.947854    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:46.450737    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:46.937445    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:47.446770    2768 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:00:47.596380    2768 kubeadm.go:1107] duration metric: took 13.3085739s to wait for elevateKubeSystemPrivileges
	W0612 13:00:47.596380    2768 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 13:00:47.596380    2768 kubeadm.go:393] duration metric: took 27.629132s to StartCluster
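The twenty-seven back-to-back "kubectl get sa default" runs above are a poll loop: minikube retries at roughly 500 ms intervals until the default service account exists, which is the signal that elevateKubeSystemPrivileges has finished (13.3 s here). A minimal Go sketch of the same poll-until-ready pattern; the command and timeout values are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
    // deadline passes, mirroring the ~500ms retry loop visible in the log above.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil // service account exists; privileges are in place
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
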
	I0612 13:00:47.596380    2768 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:47.596380    2768 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:00:47.597359    2768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:00:47.598373    2768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 13:00:47.598373    2768 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.204.232 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:00:47.601365    2768 out.go:177] * Verifying Kubernetes components...
	I0612 13:00:47.598373    2768 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0612 13:00:47.599368    2768 config.go:182] Loaded profile config "addons-605800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
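The toEnable map logged at addons.go:507 is a flat map of addon name to desired state; every "Setting addon ...=true" line that follows corresponds to one of its true entries. A hypothetical sketch of that dispatch (addon names are taken from the log; the loop itself is illustrative):

    package main

    import "fmt"

    func main() {
        // A few entries from the toEnable map in the log above.
        toEnable := map[string]bool{
            "registry":            true,
            "ingress":             true,
            "metrics-server":      true,
            "csi-hostpath-driver": true,
            "dashboard":           false,
        }
        for name, enable := range toEnable {
            if !enable {
                continue // addons mapped to false are skipped entirely
            }
            fmt.Printf("Setting addon %s=true in profile %q\n", name, "addons-605800")
        }
    }
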
	I0612 13:00:47.607370    2768 addons.go:69] Setting default-storageclass=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting ingress-dns=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting registry=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting cloud-spanner=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:234] Setting addon cloud-spanner=true in "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting metrics-server=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting storage-provisioner=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting gcp-auth=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.607370    2768 mustload.go:65] Loading cluster: addons-605800
	I0612 13:00:47.607370    2768 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting volumesnapshots=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:234] Setting addon volumesnapshots=true in "addons-605800"
	I0612 13:00:47.607370    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.607370    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.607370    2768 addons.go:234] Setting addon registry=true in "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting inspektor-gadget=true in profile "addons-605800"
	I0612 13:00:47.608371    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.608371    2768 addons.go:234] Setting addon inspektor-gadget=true in "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting volcano=true in profile "addons-605800"
	I0612 13:00:47.608371    2768 addons.go:234] Setting addon volcano=true in "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:234] Setting addon metrics-server=true in "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:234] Setting addon storage-provisioner=true in "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting yakd=true in profile "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:234] Setting addon ingress-dns=true in "addons-605800"
	I0612 13:00:47.607370    2768 addons.go:69] Setting helm-tiller=true in profile "addons-605800"
	I0612 13:00:47.608371    2768 config.go:182] Loaded profile config "addons-605800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:00:47.608371    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.608371    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.609376    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.609376    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.609376    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.609376    2768 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-605800"
	I0612 13:00:47.609376    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.610370    2768 addons.go:234] Setting addon yakd=true in "addons-605800"
	I0612 13:00:47.610370    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.610370    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.610370    2768 addons.go:234] Setting addon helm-tiller=true in "addons-605800"
	I0612 13:00:47.610370    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.607370    2768 addons.go:69] Setting ingress=true in profile "addons-605800"
	I0612 13:00:47.610370    2768 addons.go:234] Setting addon ingress=true in "addons-605800"
	I0612 13:00:47.610370    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.608371    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:47.612372    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.615377    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.616375    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.616375    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.616375    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.618388    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.618388    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.619368    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.622363    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.623362    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.625235    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.625795    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.626116    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:47.627069    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
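Each libmachine "[executing ==>]" line above shells out to powershell.exe to ask Hyper-V for the VM's state; the matching "[stdout =====>] : Running" lines further down are the replies. A sketch of issuing the same query from Go — the PowerShell expression is copied from the log, the wrapper around it is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmState runs the same PowerShell expression seen in the log to read a
    // Hyper-V VM's state, returning e.g. "Running".
    func vmState(name string) (string, error) {
        expr := fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := vmState("addons-605800")
        if err != nil {
            fmt.Println("query failed:", err)
            return
        }
        fmt.Println("VM state:", state)
    }
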
	I0612 13:00:47.629503    2768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:00:48.678918    2768 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.0805423s)
	I0612 13:00:48.678918    2768 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 13:00:48.929755    2768 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.3002478s)
	I0612 13:00:48.951754    2768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:00:50.646844    2768 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.9673642s)
	I0612 13:00:50.646844    2768 start.go:946] {"host.minikube.internal": 172.23.192.1} host record injected into CoreDNS's ConfigMap
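The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the "forward . /etc/resolv.conf" directive so pods can resolve host.minikube.internal to the Hyper-V host at 172.23.192.1, then feeds the result back through "kubectl replace". The stanza it splices in, taken verbatim from the sed program above, is:

        hosts {
           172.23.192.1 host.minikube.internal
           fallthrough
        }
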
	I0612 13:00:50.653867    2768 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.7010988s)
	I0612 13:00:50.656129    2768 node_ready.go:35] waiting up to 6m0s for node "addons-605800" to be "Ready" ...
	I0612 13:00:50.718933    2768 node_ready.go:49] node "addons-605800" has status "Ready":"True"
	I0612 13:00:50.718933    2768 node_ready.go:38] duration metric: took 62.8041ms for node "addons-605800" to be "Ready" ...
	I0612 13:00:50.718933    2768 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 13:00:50.864858    2768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9fb5n" in "kube-system" namespace to be "Ready" ...
	I0612 13:00:51.633862    2768 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-605800" context rescaled to 1 replicas
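kapi.go:248 trims the coredns deployment from kubeadm's default of two replicas down to one, which is sufficient for a single-node cluster. A hypothetical CLI equivalent of that rescale (minikube presumably does this through client-go rather than by shelling out):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // CLI equivalent of the rescale logged by kapi.go:248 (illustrative).
        cmd := exec.Command("kubectl", "--context", "addons-605800",
            "-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("scale failed: %v\n%s", err, out)
        }
    }
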
	I0612 13:00:53.158749    2768 pod_ready.go:102] pod "coredns-7db6d8ff4d-9fb5n" in "kube-system" namespace has status "Ready":"False"
	I0612 13:00:54.229769    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:54.229769    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:54.238760    2768 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 13:00:54.243762    2768 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 13:00:54.256765    2768 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0612 13:00:54.270766    2768 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0612 13:00:54.270766    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0612 13:00:54.271775    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:54.281776    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:54.282768    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:54.287540    2768 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0612 13:00:54.305550    2768 out.go:177]   - Using image docker.io/registry:2.8.3
	I0612 13:00:54.321188    2768 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0612 13:00:54.321188    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0612 13:00:54.321188    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:54.570673    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:54.570673    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:54.574682    2768 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 13:00:54.579681    2768 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 13:00:54.579681    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 13:00:54.579681    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:54.664397    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:54.665391    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:54.672381    2768 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0612 13:00:54.676873    2768 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0612 13:00:54.676952    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0612 13:00:54.677020    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:54.686382    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:54.686382    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:54.687392    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:54.687392    2768 main.go:141] libmachine: [stderr =====>] : 
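The recurring pair of "installing <manifest>" and "scp memory --> <path> (N bytes)" lines means the addon manifests are rendered in memory and streamed straight to the node over SSH rather than copied from files on the Windows host; the Get-VM state queries interleaved with them appear to be libmachine re-validating the VM before each dial. A hypothetical sketch of streaming in-memory YAML over an established *ssh.Client:

    package main

    import (
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // pushManifest streams in-memory YAML to a remote path over an existing
    // SSH client, mirroring the "scp memory --> ..." lines in this log.
    func pushManifest(client *ssh.Client, path string, data []byte) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        stdin, err := session.StdinPipe()
        if err != nil {
            return err
        }
        // "sudo tee" is a simple stand-in for a real scp/sftp transfer.
        if err := session.Start(fmt.Sprintf("sudo tee %s > /dev/null", path)); err != nil {
            return err
        }
        if _, err := stdin.Write(data); err != nil {
            return err
        }
        stdin.Close()
        return session.Wait()
    }

    func main() {
        // Wire up an *ssh.Client (see the sshutil.go lines later in this log)
        // and call pushManifest(client, "/etc/kubernetes/addons/x.yaml", yaml).
    }
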
	I0612 13:00:54.691587    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0612 13:00:54.691390    2768 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-605800"
	I0612 13:00:54.698405    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:54.699405    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:54.701446    2768 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0612 13:00:54.701446    2768 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0612 13:00:54.701446    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:54.964392    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:54.965851    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:54.969793    2768 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0612 13:00:54.974790    2768 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0612 13:00:54.974790    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0612 13:00:54.974790    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:54.994776    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:54.994776    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.006778    2768 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0612 13:00:55.014519    2768 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0612 13:00:55.014519    2768 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0612 13:00:55.014519    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:55.015525    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:55.015525    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.015525    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:55.013497    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:55.016497    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:55.016939    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.020531    2768 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0612 13:00:55.022523    2768 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0612 13:00:55.022523    2768 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0612 13:00:55.023581    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:55.017529    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.028689    2768 addons.go:234] Setting addon default-storageclass=true in "addons-605800"
	I0612 13:00:55.028689    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:00:55.029926    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:55.428497    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:55.428497    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.441492    2768 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0612 13:00:55.442943    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:55.447315    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.462571    2768 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0612 13:00:55.444550    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:55.458559    2768 pod_ready.go:102] pod "coredns-7db6d8ff4d-9fb5n" in "kube-system" namespace has status "Ready":"False"
	I0612 13:00:55.521549    2768 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0612 13:00:55.521549    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.529991    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:55.539474    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:00:55.539474    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.546461    2768 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0612 13:00:55.542475    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:00:55.549460    2768 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0612 13:00:55.552466    2768 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0612 13:00:55.552466    2768 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0612 13:00:55.577619    2768 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0612 13:00:55.596621    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:55.577619    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0612 13:00:55.598623    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:55.634537    2768 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0612 13:00:55.641374    2768 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0612 13:00:55.641374    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0612 13:00:55.641374    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:55.638559    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0612 13:00:55.699123    2768 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0612 13:00:55.699123    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0612 13:00:55.699123    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:00:57.910990    2768 pod_ready.go:102] pod "coredns-7db6d8ff4d-9fb5n" in "kube-system" namespace has status "Ready":"False"
	I0612 13:00:58.848988    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0612 13:00:58.868998    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0612 13:00:58.880015    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0612 13:00:58.906998    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0612 13:00:58.980002    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0612 13:00:59.066002    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0612 13:00:59.156950    2768 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0612 13:00:59.218273    2768 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0612 13:00:59.218273    2768 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0612 13:00:59.218273    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:01:00.059041    2768 pod_ready.go:102] pod "coredns-7db6d8ff4d-9fb5n" in "kube-system" namespace has status "Ready":"False"
	I0612 13:01:00.374921    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:00.374921    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:00.374921    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:00.374921    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:00.374921    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:00.374921    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:00.631202    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:00.631202    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:00.631202    2768 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 13:01:00.631202    2768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 13:01:00.631202    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:01:00.726191    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:00.727192    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:00.732909    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:00.839057    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:00.839057    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:00.839057    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:01.202645    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:01.202645    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:01.203645    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:01.259562    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:01.259562    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:01.260160    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:01.291083    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:01.291083    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:01.330094    2768 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0612 13:01:01.360228    2768 out.go:177]   - Using image docker.io/busybox:stable
	I0612 13:01:01.364245    2768 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0612 13:01:01.364245    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0612 13:01:01.364245    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:01:01.940740    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:01.941112    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:01.941112    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:02.199508    2768 pod_ready.go:102] pod "coredns-7db6d8ff4d-9fb5n" in "kube-system" namespace has status "Ready":"False"
	I0612 13:01:02.280947    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:02.281869    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:02.283863    2768 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0612 13:01:02.283863    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:01:02.798926    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:02.798926    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:02.798926    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:02.836917    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:02.836917    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:02.836917    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:02.865269    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:02.865269    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:02.865269    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:03.862212    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:03.862212    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:03.862212    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:04.051199    2768 pod_ready.go:92] pod "coredns-7db6d8ff4d-9fb5n" in "kube-system" namespace has status "Ready":"True"
	I0612 13:01:04.051199    2768 pod_ready.go:81] duration metric: took 13.1863002s for pod "coredns-7db6d8ff4d-9fb5n" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.051199    2768 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d6q6w" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.085207    2768 pod_ready.go:92] pod "coredns-7db6d8ff4d-d6q6w" in "kube-system" namespace has status "Ready":"True"
	I0612 13:01:04.085207    2768 pod_ready.go:81] duration metric: took 34.0085ms for pod "coredns-7db6d8ff4d-d6q6w" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.086207    2768 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-605800" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.443964    2768 pod_ready.go:92] pod "etcd-addons-605800" in "kube-system" namespace has status "Ready":"True"
	I0612 13:01:04.443964    2768 pod_ready.go:81] duration metric: took 357.7558ms for pod "etcd-addons-605800" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.443964    2768 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-605800" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.503963    2768 pod_ready.go:92] pod "kube-apiserver-addons-605800" in "kube-system" namespace has status "Ready":"True"
	I0612 13:01:04.503963    2768 pod_ready.go:81] duration metric: took 59.9991ms for pod "kube-apiserver-addons-605800" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.503963    2768 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-605800" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.731749    2768 pod_ready.go:92] pod "kube-controller-manager-addons-605800" in "kube-system" namespace has status "Ready":"True"
	I0612 13:01:04.731749    2768 pod_ready.go:81] duration metric: took 227.785ms for pod "kube-controller-manager-addons-605800" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.731749    2768 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-87ftx" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.781802    2768 pod_ready.go:92] pod "kube-proxy-87ftx" in "kube-system" namespace has status "Ready":"True"
	I0612 13:01:04.781802    2768 pod_ready.go:81] duration metric: took 50.0536ms for pod "kube-proxy-87ftx" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.781802    2768 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-605800" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.854797    2768 pod_ready.go:92] pod "kube-scheduler-addons-605800" in "kube-system" namespace has status "Ready":"True"
	I0612 13:01:04.855811    2768 pod_ready.go:81] duration metric: took 74.008ms for pod "kube-scheduler-addons-605800" in "kube-system" namespace to be "Ready" ...
	I0612 13:01:04.855811    2768 pod_ready.go:38] duration metric: took 14.1368333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
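All of the pod_ready.go traffic above follows one pattern: read the pod's Ready condition, log line 102 while it is "False", log line 92 once it flips to "True", and give up after the 6m budget. A sketch of checking the same condition through kubectl's jsonpath output (the pod name is from the log; the loop and interval are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reads the pod's Ready condition, the same signal the
    // pod_ready.go lines in this log are polling.
    func podReady(pod, ns string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        for i := 0; i < 180; i++ { // ~6m at a 2s interval, like the log's budget
            if ok, err := podReady("coredns-7db6d8ff4d-9fb5n", "kube-system"); err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }
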
	I0612 13:01:04.855811    2768 api_server.go:52] waiting for apiserver process to appear ...
	I0612 13:01:04.877805    2768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:01:05.001435    2768 api_server.go:72] duration metric: took 17.4030085s to wait for apiserver process to appear ...
	I0612 13:01:05.002447    2768 api_server.go:88] waiting for apiserver healthz status ...
	I0612 13:01:05.002447    2768 api_server.go:253] Checking apiserver healthz at https://172.23.204.232:8443/healthz ...
	I0612 13:01:05.021446    2768 api_server.go:279] https://172.23.204.232:8443/healthz returned 200:
	ok
	I0612 13:01:05.027434    2768 api_server.go:141] control plane version: v1.30.1
	I0612 13:01:05.027434    2768 api_server.go:131] duration metric: took 24.9875ms to wait for apiserver health ...
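api_server.go treats a plain-text 200 "ok" from /healthz as a healthy control plane. A minimal sketch of the same probe; TLS verification is skipped here purely for illustration, whereas a real probe should trust the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative only: a real probe should verify against the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://172.23.204.232:8443/healthz")
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok", as in the log
    }
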
	I0612 13:01:05.027434    2768 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 13:01:05.055435    2768 system_pods.go:59] 7 kube-system pods found
	I0612 13:01:05.055435    2768 system_pods.go:61] "coredns-7db6d8ff4d-9fb5n" [bd4d9dc2-4579-4a16-8314-e5d0e34c19af] Running
	I0612 13:01:05.055435    2768 system_pods.go:61] "coredns-7db6d8ff4d-d6q6w" [3fd979e5-7fdb-4eb1-a2a3-de8916c1383c] Running
	I0612 13:01:05.055435    2768 system_pods.go:61] "etcd-addons-605800" [931eff2f-4646-4442-9435-8c2e2daa4e83] Running
	I0612 13:01:05.055435    2768 system_pods.go:61] "kube-apiserver-addons-605800" [8b3cc62e-6a93-41aa-bae7-bd446bfd3bb9] Running
	I0612 13:01:05.055435    2768 system_pods.go:61] "kube-controller-manager-addons-605800" [552a9dae-a42c-4d9d-b894-363ca0a6f130] Running
	I0612 13:01:05.055435    2768 system_pods.go:61] "kube-proxy-87ftx" [bbe77b39-d3a0-4c40-8f12-70f7c8e8eb78] Running
	I0612 13:01:05.055435    2768 system_pods.go:61] "kube-scheduler-addons-605800" [64336b6c-3bbb-4bf1-8c60-367941f9940f] Running
	I0612 13:01:05.055435    2768 system_pods.go:74] duration metric: took 28.0001ms to wait for pod list to return data ...
	I0612 13:01:05.055435    2768 default_sa.go:34] waiting for default service account to be created ...
	I0612 13:01:05.095568    2768 default_sa.go:45] found service account: "default"
	I0612 13:01:05.096568    2768 default_sa.go:55] duration metric: took 41.1333ms for default service account to be created ...
	I0612 13:01:05.096568    2768 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 13:01:05.298047    2768 system_pods.go:86] 7 kube-system pods found
	I0612 13:01:05.298047    2768 system_pods.go:89] "coredns-7db6d8ff4d-9fb5n" [bd4d9dc2-4579-4a16-8314-e5d0e34c19af] Running
	I0612 13:01:05.298047    2768 system_pods.go:89] "coredns-7db6d8ff4d-d6q6w" [3fd979e5-7fdb-4eb1-a2a3-de8916c1383c] Running
	I0612 13:01:05.298047    2768 system_pods.go:89] "etcd-addons-605800" [931eff2f-4646-4442-9435-8c2e2daa4e83] Running
	I0612 13:01:05.298047    2768 system_pods.go:89] "kube-apiserver-addons-605800" [8b3cc62e-6a93-41aa-bae7-bd446bfd3bb9] Running
	I0612 13:01:05.298047    2768 system_pods.go:89] "kube-controller-manager-addons-605800" [552a9dae-a42c-4d9d-b894-363ca0a6f130] Running
	I0612 13:01:05.298047    2768 system_pods.go:89] "kube-proxy-87ftx" [bbe77b39-d3a0-4c40-8f12-70f7c8e8eb78] Running
	I0612 13:01:05.298047    2768 system_pods.go:89] "kube-scheduler-addons-605800" [64336b6c-3bbb-4bf1-8c60-367941f9940f] Running
	I0612 13:01:05.298047    2768 system_pods.go:126] duration metric: took 201.4785ms to wait for k8s-apps to be running ...
	I0612 13:01:05.298047    2768 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 13:01:05.319042    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:01:05.407893    2768 system_svc.go:56] duration metric: took 109.8456ms WaitForService to wait for kubelet
	I0612 13:01:05.408894    2768 kubeadm.go:576] duration metric: took 17.8104656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 13:01:05.408894    2768 node_conditions.go:102] verifying NodePressure condition ...
	I0612 13:01:05.523070    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:05.525273    2768 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:01:05.525273    2768 node_conditions.go:123] node cpu capacity is 2
	I0612 13:01:05.525273    2768 node_conditions.go:105] duration metric: took 116.3786ms to run NodePressure ...
	I0612 13:01:05.525273    2768 start.go:240] waiting for startup goroutines ...
	I0612 13:01:06.150366    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:06.150366    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:06.150366    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:06.926749    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:06.927737    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:06.927737    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:08.211325    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:08.211325    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:08.211325    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:08.286970    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:08.288003    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:08.288003    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
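With the adapter IP (172.23.204.232) in hand, sshutil.go:53 opens SSH sessions as user docker using the per-machine id_rsa key, and every subsequent ssh_runner command travels over one of these sessions. A hedged sketch of building such a client with golang.org/x/crypto/ssh; the key path, user, and address are from the log, the code itself is illustrative:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustrative; verify host keys in production
        }
        client, err := ssh.Dial("tcp", "172.23.204.232:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected:", string(client.ServerVersion()))
    }
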
	I0612 13:01:08.404497    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:08.404497    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:08.404497    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:08.511310    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:08.511310    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:08.511614    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:08.617195    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:08.617195    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:08.617195    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:08.746034    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:08.746034    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:08.746034    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:08.863984    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:08.863984    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:08.863984    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:08.865979    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0612 13:01:08.936423    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:08.936423    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:08.936423    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:08.945028    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 13:01:09.251754    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0612 13:01:09.458309    2768 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0612 13:01:09.459300    2768 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0612 13:01:09.564848    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0612 13:01:09.815829    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:09.815829    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:09.816561    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:09.906572    2768 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0612 13:01:09.906572    2768 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0612 13:01:09.957619    2768 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0612 13:01:09.957619    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0612 13:01:09.995077    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:09.995077    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:09.995077    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:10.076617    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:10.076687    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:10.076904    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:10.136484    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:10.136484    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:10.136756    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:10.157792    2768 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0612 13:01:10.157792    2768 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0612 13:01:10.199401    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:10.199488    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:10.199534    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:10.308499    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0612 13:01:10.455368    2768 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0612 13:01:10.455455    2768 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0612 13:01:10.608001    2768 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0612 13:01:10.608001    2768 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0612 13:01:10.610008    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0612 13:01:10.733020    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:10.733447    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:10.733673    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:10.742522    2768 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0612 13:01:10.742522    2768 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0612 13:01:10.845801    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:10.845944    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:10.846012    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:10.875103    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0612 13:01:10.894104    2768 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0612 13:01:10.894104    2768 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0612 13:01:10.901892    2768 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0612 13:01:10.901985    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0612 13:01:10.925887    2768 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0612 13:01:10.925993    2768 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0612 13:01:10.985554    2768 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 13:01:10.985554    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0612 13:01:11.154436    2768 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0612 13:01:11.154436    2768 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0612 13:01:11.167923    2768 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0612 13:01:11.167923    2768 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0612 13:01:11.212816    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:11.212816    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:11.212816    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:11.251500    2768 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0612 13:01:11.251500    2768 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0612 13:01:11.302700    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 13:01:11.432320    2768 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0612 13:01:11.432320    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0612 13:01:11.462612    2768 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 13:01:11.462612    2768 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0612 13:01:11.590848    2768 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0612 13:01:11.590848    2768 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0612 13:01:11.597849    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0612 13:01:11.623857    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:11.623857    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:11.623857    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:11.660851    2768 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0612 13:01:11.660851    2768 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0612 13:01:11.744729    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0612 13:01:11.812268    2768 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0612 13:01:11.812435    2768 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0612 13:01:11.847288    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0612 13:01:11.975832    2768 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0612 13:01:11.975898    2768 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0612 13:01:12.165826    2768 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0612 13:01:12.165826    2768 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0612 13:01:12.173475    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 13:01:12.272359    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:12.272359    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:12.272359    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:12.483498    2768 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0612 13:01:12.483603    2768 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0612 13:01:12.617403    2768 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0612 13:01:12.617403    2768 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0612 13:01:12.723864    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0612 13:01:13.076779    2768 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0612 13:01:13.077171    2768 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0612 13:01:13.387136    2768 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0612 13:01:13.387136    2768 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0612 13:01:13.617875    2768 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0612 13:01:13.655810    2768 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0612 13:01:13.655885    2768 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0612 13:01:14.328398    2768 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0612 13:01:14.328398    2768 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0612 13:01:14.848428    2768 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0612 13:01:14.848428    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0612 13:01:15.066003    2768 addons.go:234] Setting addon gcp-auth=true in "addons-605800"
	I0612 13:01:15.066171    2768 host.go:66] Checking if "addons-605800" exists ...
	I0612 13:01:15.067726    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:01:15.120483    2768 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0612 13:01:15.120639    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0612 13:01:15.724502    2768 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0612 13:01:15.724502    2768 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0612 13:01:15.899082    2768 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0612 13:01:15.899148    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0612 13:01:16.003820    2768 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0612 13:01:16.003820    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0612 13:01:16.041045    2768 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0612 13:01:16.041128    2768 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0612 13:01:16.354544    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0612 13:01:16.606328    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0612 13:01:17.562741    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:17.562769    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:17.578011    2768 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0612 13:01:17.578011    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-605800 ).state
	I0612 13:01:19.826566    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:01:19.826566    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:19.826566    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-605800 ).networkadapters[0]).ipaddresses[0]
	I0612 13:01:22.614353    2768 main.go:141] libmachine: [stdout =====>] : 172.23.204.232
	
	I0612 13:01:22.614707    2768 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:01:22.614781    2768 sshutil.go:53] new ssh client: &{IP:172.23.204.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-605800\id_rsa Username:docker}
	I0612 13:01:23.247408    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (14.3805028s)
	I0612 13:01:23.247503    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (14.302336s)
	I0612 13:01:23.247503    2768 addons.go:475] Verifying addon ingress=true in "addons-605800"
	I0612 13:01:23.247634    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (13.9957696s)
	I0612 13:01:23.247680    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.6827891s)
	I0612 13:01:23.247787    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.9392479s)
	I0612 13:01:23.247860    2768 addons.go:475] Verifying addon registry=true in "addons-605800"
	I0612 13:01:23.252234    2768 out.go:177] * Verifying ingress addon...
	I0612 13:01:23.254513    2768 out.go:177] * Verifying registry addon...
	I0612 13:01:23.262752    2768 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0612 13:01:23.265739    2768 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0612 13:01:23.295191    2768 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0612 13:01:23.295306    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:23.295191    2768 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0612 13:01:23.295306    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
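The kapi.go lines above (and the long run that follows) are minikube's poll loop: it re-lists the pods matching each label selector until they leave Pending. A minimal equivalent check from the shell, assuming the same context, namespaces, and label selectors shown in the log (the 6m timeout is an illustrative choice, not taken from the log):

    # Block until the labelled pods report Ready instead of
    # re-listing them by hand on every tick.
    kubectl --context addons-605800 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
    kubectl --context addons-605800 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
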
	I0612 13:01:23.784686    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:23.815837    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:24.280950    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:24.309227    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:24.863917    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:24.886591    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:25.312632    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:25.312632    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:25.893992    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:25.894514    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:26.357000    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:26.357598    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:26.782512    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:26.809286    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:27.141477    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (16.5314179s)
	I0612 13:01:27.141477    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (16.2663232s)
	I0612 13:01:27.141477    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (15.8386783s)
	I0612 13:01:27.141477    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (15.5435798s)
	W0612 13:01:27.141477    2768 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0612 13:01:27.142021    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (15.3972443s)
	I0612 13:01:27.142021    2768 retry.go:31] will retry after 141.994781ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
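The failure above is an ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml was submitted in the same apply batch as the CRDs that define its kind, and the API server had not yet established those CRDs, hence "no matches for kind "VolumeSnapshotClass"". minikube recovers by retrying (and eventually re-applies with --force at 13:01:27 below). A sketch of the ordering fix, assuming the same manifest paths as the log: apply the CRDs first, wait for them to reach the Established condition, then create the class.

    # Register the snapshot CRDs and wait until the API server can
    # serve the new kinds before instantiating one of them.
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # Only now does the VolumeSnapshotClass manifest have a kind to map to.
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
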
	I0612 13:01:27.144729    2768 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-605800 service yakd-dashboard -n yakd-dashboard
	
	I0612 13:01:27.142299    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (15.2949185s)
	I0612 13:01:27.142299    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.9687776s)
	I0612 13:01:27.142478    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.41857s)
	I0612 13:01:27.142478    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.7870878s)
	I0612 13:01:27.144729    2768 addons.go:475] Verifying addon metrics-server=true in "addons-605800"
	W0612 13:01:27.188216    2768 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
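The warning above is a standard optimistic-concurrency conflict: the addon read the local-path StorageClass, another writer updated the object first, and the subsequent write carried a stale resourceVersion. A server-side merge patch sidesteps that race because it carries no resourceVersion at all; a hedged sketch using the documented default-class annotation, with the storage class name taken from the error message:

    # A merge patch is applied atomically on the server, so it cannot
    # hit the "object has been modified" conflict that a read-modify-
    # write update does.
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
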
	I0612 13:01:27.295490    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:27.297492    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:27.298511    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0612 13:01:27.824491    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:27.860504    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:28.092853    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.4864193s)
	I0612 13:01:28.092969    2768 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-605800"
	I0612 13:01:28.092969    2768 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.5149247s)
	I0612 13:01:28.099526    2768 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0612 13:01:28.103900    2768 out.go:177] * Verifying csi-hostpath-driver addon...
	I0612 13:01:28.108858    2768 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0612 13:01:28.112876    2768 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0612 13:01:28.112876    2768 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0612 13:01:28.108858    2768 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0612 13:01:28.174989    2768 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0612 13:01:28.174989    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:28.283593    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:28.295798    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:28.301507    2768 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0612 13:01:28.301602    2768 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0612 13:01:28.474064    2768 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0612 13:01:28.474064    2768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0612 13:01:28.587590    2768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0612 13:01:28.657654    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:28.778980    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:28.783832    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:29.126933    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:29.284700    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:29.286847    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:29.635666    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:29.778983    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:29.779386    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:29.978401    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.6768941s)
	I0612 13:01:30.126977    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:30.279887    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:30.292975    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:30.499130    2768 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.9115336s)
	I0612 13:01:30.509640    2768 addons.go:475] Verifying addon gcp-auth=true in "addons-605800"
	I0612 13:01:30.512638    2768 out.go:177] * Verifying gcp-auth addon...
	I0612 13:01:30.520511    2768 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0612 13:01:30.529904    2768 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0612 13:01:30.628713    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:30.774804    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:30.775083    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:31.123168    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:31.281428    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:31.281925    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:31.630116    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:31.769743    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:31.786385    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:32.126739    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:32.284893    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:32.286492    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:32.632910    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:32.772413    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:32.772569    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:33.137178    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:33.277247    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:33.278237    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:33.624847    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:33.779215    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:33.780667    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:34.129417    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:34.285515    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:34.287847    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:34.633213    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:34.774763    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:34.776740    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:35.125730    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:35.283773    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:35.286619    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:35.631306    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:35.772620    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:35.772620    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:36.128959    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:36.310592    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:36.312285    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:36.633638    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:36.776759    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:36.777564    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:37.125726    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:37.282075    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:37.282376    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:37.630485    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:37.773702    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:37.776061    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:38.137380    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:38.280466    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:38.281359    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:38.625846    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:38.784036    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:38.784036    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:39.135244    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:39.275903    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:39.277942    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:39.625124    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:39.780685    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:39.780951    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:40.127425    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:40.269188    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:40.274554    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:40.628477    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:40.769172    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:40.780285    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:41.136239    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:41.276740    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:41.279304    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:41.628359    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:41.784281    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:41.784904    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:42.137034    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:42.280857    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:42.281040    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:42.631778    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:42.788312    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:42.789929    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:43.528187    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:43.531845    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:43.535385    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:43.628968    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:43.783366    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:43.785211    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:44.134878    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:44.270265    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:44.276028    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:44.639906    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:44.779345    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:44.779486    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:45.129915    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:45.271357    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:45.278197    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:45.635127    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:45.776068    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:45.781397    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:46.130493    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:46.284983    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:46.288072    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:46.696780    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:46.776067    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:46.783217    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:47.136705    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:47.282859    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:47.282859    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:47.628671    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:47.770225    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:47.775660    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:48.135692    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:48.276427    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:48.276427    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:48.622285    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:48.781963    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:48.783275    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:49.133535    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:49.276244    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:49.276244    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:49.625071    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:49.782458    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:49.784360    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:50.134862    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:50.277313    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:50.277806    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:50.630658    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:50.769242    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:50.774421    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:51.474471    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:51.474471    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:51.482480    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:51.658708    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:51.769954    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:51.775266    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:52.133468    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:52.284741    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:52.286984    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:52.635832    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:52.781796    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:52.793553    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:53.164129    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:53.321344    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:53.324122    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:53.635640    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:53.795547    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:53.795547    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:54.134448    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:54.278560    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:54.278560    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:54.645960    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:54.783163    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:54.785998    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:55.133054    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:55.277666    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:55.277666    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:55.626752    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:55.786331    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:55.786648    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:56.135563    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:56.276577    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:56.276970    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:56.624275    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:56.781845    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:56.785816    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:57.131952    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:57.282212    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:57.283306    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:57.623336    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:57.782567    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:57.785253    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:58.123857    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:58.270593    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:58.275672    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:58.622450    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:58.783410    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:58.784410    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:59.132361    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:59.274486    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:59.274694    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:01:59.621716    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:01:59.780770    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:01:59.782498    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:00.127762    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:00.284188    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:00.285539    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:00.634275    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:00.778355    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:00.781764    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:01.126267    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:01.285895    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:01.288552    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:01.622995    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:01.779657    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:01.780655    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:02.136167    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:02.277211    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:02.277241    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:02.809969    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:02.812350    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:02.816309    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:03.126205    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:03.283377    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:03.285623    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:03.632189    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:03.770473    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:03.775042    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:04.124167    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:04.278737    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:04.281158    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:04.626252    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:04.782932    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:04.783790    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:05.263196    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:05.268195    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:05.273692    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:05.632322    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:05.815320    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:05.815457    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:06.131136    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:06.273473    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:06.275729    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:06.623638    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:06.781599    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:06.781702    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:07.131729    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:07.288433    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:07.291724    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:07.634931    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:07.778940    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:07.778940    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:08.418797    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:08.422569    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:08.422936    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:08.627806    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:08.784442    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:08.787913    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:09.127347    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:09.280775    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:09.281165    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:09.624764    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:09.781866    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:09.783540    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:10.127614    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:10.281590    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:10.283600    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:10.628270    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:10.783187    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:10.783239    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:11.135205    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:11.271129    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:11.276170    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:11.635202    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:11.778680    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:11.778680    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:12.125071    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:12.280151    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:12.283105    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:12.630583    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:12.787662    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:12.789896    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:13.256261    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:13.275275    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:13.276061    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:13.623275    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:13.780621    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:13.780685    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:14.130771    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:14.276182    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:14.280663    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:14.644580    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:14.783659    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:14.786146    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:15.131222    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:15.270377    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:15.275635    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:15.640776    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:15.779108    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:15.779145    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:16.126721    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:16.284384    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:16.284471    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:16.633690    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:16.776294    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:16.776294    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:17.125810    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:17.284096    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:17.284096    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:17.634498    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:17.774334    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:17.781550    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:18.122756    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:18.281732    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:18.285182    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:18.629126    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:18.774907    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:18.780307    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:19.134868    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:19.275182    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:19.278765    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:19.624087    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:19.781982    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:19.782142    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:20.128706    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:20.271077    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:20.275237    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:20.635024    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:20.777173    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:20.778166    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:21.130999    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:21.270845    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:21.275593    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:21.620785    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:21.779813    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:21.779813    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:22.132305    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:22.270993    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:22.275065    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:22.629766    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:22.874266    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:22.875554    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:23.126032    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:23.288113    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:23.289094    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:23.625252    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:23.774136    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:23.774136    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:24.124293    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:24.282264    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:24.284551    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:24.629887    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:24.771930    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:24.778063    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:25.122177    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:25.573906    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:25.575976    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:25.665497    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:25.780057    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:25.781088    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:26.128283    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:26.459035    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:26.463008    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:26.635341    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:26.772742    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:26.773112    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:27.132912    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:27.276357    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:27.277826    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0612 13:02:27.632059    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:27.787570    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:27.789579    2768 kapi.go:107] duration metric: took 1m4.52364s to wait for kubernetes.io/minikube-addons=registry ...
	I0612 13:02:28.134232    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:28.275027    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:28.628130    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:28.780425    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:29.133013    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:29.271989    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:29.625874    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:29.782319    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:30.132533    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:30.275980    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:30.630350    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:31.117770    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:31.125488    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:31.273602    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:31.633466    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:31.770065    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:32.124512    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:32.284119    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:32.634785    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:32.788505    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:33.122588    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:33.277734    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:33.630505    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:33.772355    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:34.125640    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:34.281571    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:34.646800    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:34.768988    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:35.126732    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:35.282214    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:35.632875    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:35.785456    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:36.135858    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:36.275336    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:36.625678    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:36.781520    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:37.131565    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:37.273124    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:37.623816    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:37.780414    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:38.309190    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:38.314722    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:38.623134    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:38.780824    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:39.132843    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:39.271099    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:39.623256    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:39.911488    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:40.122718    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:40.300406    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:40.637031    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:40.774323    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:41.140738    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:41.276386    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:41.651402    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:41.783898    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:42.124863    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:42.280019    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:42.632706    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:42.772192    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:43.136910    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:43.276339    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:43.624845    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:43.779969    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:44.129675    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:44.272407    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:44.625272    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:44.782641    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:45.133472    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:45.274204    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:45.770848    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:45.777170    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:46.129197    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:46.334310    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:46.633879    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:46.774736    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:47.123079    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:47.278804    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:47.660495    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:47.797682    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:48.137801    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:48.362923    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:48.624565    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:48.784276    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:49.133081    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:49.274691    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:49.626002    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:49.779561    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:50.128339    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:50.283215    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:50.634548    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:50.777129    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:51.130016    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:51.282596    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:51.636546    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:51.776884    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:52.126033    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:52.280135    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:52.816968    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:52.817016    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:53.123926    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:53.276887    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:53.628061    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:53.785437    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:54.135597    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:54.274993    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:54.920913    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:54.925183    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:55.259058    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:55.275708    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:55.674899    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:55.882694    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:56.121828    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:56.279057    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:56.630174    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:56.785223    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:57.135281    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:57.275509    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:57.627778    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:57.783608    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:58.122310    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:58.313155    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:58.626103    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:58.785651    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:59.130461    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:59.279079    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:02:59.625604    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:02:59.778469    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:00.129427    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:00.270733    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:00.635632    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:00.775464    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:01.131275    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:01.283077    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:01.634373    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:01.776530    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:02.127384    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:02.456589    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:02.728327    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:02.858542    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:03.135091    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:03.271321    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:03.621349    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:03.780349    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:04.129283    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:04.271467    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:04.624678    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:04.782163    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:05.136300    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:05.275509    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:05.625238    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:05.780509    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:06.131006    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:06.270460    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:06.621502    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:06.779345    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:07.129225    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:07.283245    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:07.830807    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:07.838628    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:08.420709    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:08.423090    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:08.633666    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:08.775553    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:09.125161    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:09.280643    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:09.629802    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:09.770338    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:10.139396    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:10.276584    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:10.632206    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:10.780489    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:11.131832    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:11.271678    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:11.635241    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:11.774669    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:12.128374    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:12.283593    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:12.636491    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:12.777571    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:13.125006    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:13.353132    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:13.639835    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:13.770198    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:14.126194    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:14.279202    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:14.630407    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:14.778159    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:15.134796    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:15.278377    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:15.630773    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:15.771828    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:16.123518    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:16.279206    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:16.633569    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:16.772775    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:17.178143    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:17.278999    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:17.624278    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:17.780174    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:18.140351    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:18.291335    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:18.639566    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:18.777752    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:19.128347    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:19.284388    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:19.633467    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:19.776795    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:20.125597    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:20.278438    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:20.623016    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:20.783938    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:21.130161    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:21.283480    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:21.630941    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:21.770872    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:22.122703    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:22.278907    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:22.628648    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:22.784009    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:23.134862    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:23.572306    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:23.660369    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:23.774163    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:24.127530    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:24.280858    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:24.632706    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:24.773120    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:25.134979    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:25.277455    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:25.636182    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:25.788182    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:26.137471    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:26.522488    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:26.633042    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:26.780213    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:27.130480    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:27.282968    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:27.629819    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:27.771626    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:28.137646    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:28.276439    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:28.628243    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:28.770262    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:29.125462    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:29.279983    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:29.633889    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:29.859939    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:30.136079    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:30.284531    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:30.624878    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:30.780414    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:31.135950    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:31.274196    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:31.627433    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:31.786451    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:32.131656    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:32.275397    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:32.625226    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:32.781215    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:33.131213    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:33.270027    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:33.623086    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:33.777396    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:34.129699    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:34.282970    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:34.636734    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:34.773362    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:35.129595    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:35.280321    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:35.651300    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:35.772342    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:36.135221    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:36.277587    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:36.625495    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:36.780359    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:37.271071    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:37.274859    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:37.638754    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:37.775432    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:38.129962    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:38.285144    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:38.635371    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:38.779876    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:39.129685    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:39.275635    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:39.623057    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:39.780217    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:40.136780    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:40.274329    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:40.628318    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:40.806995    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:41.132371    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:41.274947    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:41.624686    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:41.777601    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:42.127829    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:42.281743    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:42.631991    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:42.769547    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:43.122051    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:43.288050    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:43.637031    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:43.770610    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:44.124508    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:44.277976    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:44.629711    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:44.770732    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:45.135986    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:45.276988    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:45.633121    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:45.881274    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:46.140042    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:46.283719    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:46.630332    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:46.857945    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:47.140341    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:47.278038    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:47.628301    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:47.783556    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:48.134474    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:48.275303    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:48.631181    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:48.784875    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:49.137948    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:49.275443    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:49.628964    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:49.783792    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:50.138811    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:50.278376    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:50.628939    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:50.769986    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:51.140592    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:51.278500    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:51.636418    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:51.778709    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:52.135392    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:52.276363    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:52.620374    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:52.776786    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:53.127100    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:53.276494    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:53.622808    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:53.784776    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:54.129306    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:54.285155    2768 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0612 13:03:54.635555    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:54.774916    2768 kapi.go:107] duration metric: took 2m31.5116953s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0612 13:03:55.125337    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:55.636431    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:56.323826    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:56.637788    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:57.125540    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:57.635890    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:58.127871    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:58.634217    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:59.136524    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:03:59.636613    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:04:00.123875    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:04:00.630882    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:04:01.128318    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:04:01.635727    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:04:02.133392    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:04:02.622331    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0612 13:04:03.137151    2768 kapi.go:107] duration metric: took 2m35.0278142s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0612 13:04:14.532036    2768 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0612 13:04:14.532564    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:15.036344    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:15.537648    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:16.039822    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:16.539946    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:17.044428    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:17.540883    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:18.030417    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:18.533266    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:19.031036    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:19.531963    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:20.032628    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:20.533450    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:21.030415    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:21.531794    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:22.031098    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:22.530703    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:23.028712    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:23.539393    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:24.030859    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:24.529080    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:25.029586    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:25.530866    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:26.032314    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:26.532666    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:27.034327    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:27.536803    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:28.029174    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:28.540628    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:29.029821    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:29.535749    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:30.042674    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:30.535271    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:31.037746    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:31.539555    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:32.037082    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:32.536827    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:33.037915    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:33.539419    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:34.043201    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:34.536506    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:35.032128    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:35.537009    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:36.042628    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:36.539289    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:37.039856    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:37.530965    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:38.032017    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:38.535805    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:39.035167    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:39.533928    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:40.036321    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:40.536912    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:41.034673    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:41.536057    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:42.035392    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:42.536056    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:43.036090    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:43.531666    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:44.037121    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:44.534340    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:45.031631    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:45.532074    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:46.033481    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:46.537806    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:47.040136    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:47.527624    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:48.035420    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:48.532207    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:49.040951    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:49.533287    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:50.360076    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:50.540898    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:51.029419    2768 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0612 13:04:51.531335    2768 kapi.go:107] duration metric: took 3m21.0102036s to wait for kubernetes.io/minikube-addons=gcp-auth ...
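	The repeated kapi.go:96 lines above are minikube's pod-wait loop: for each addon it polls pods matching a label selector roughly every 500ms until one reports a state other than Pending, then records the total wait as a duration metric (the kapi.go:107 lines). A minimal equivalent check from the command line, assuming kubectl access to the same cluster context; the selector, namespace, and timeout here are illustrative, not minikube's internals:

	  kubectl --context addons-605800 wait --namespace gcp-auth \
	    --for=condition=Ready pod \
	    --selector=kubernetes.io/minikube-addons=gcp-auth \
	    --timeout=6m0s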
	I0612 13:04:51.537095    2768 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-605800 cluster.
	I0612 13:04:51.539993    2768 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0612 13:04:51.542023    2768 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0612 13:04:51.545957    2768 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, volcano, nvidia-device-plugin, helm-tiller, inspektor-gadget, yakd, metrics-server, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0612 13:04:51.550005    2768 addons.go:510] duration metric: took 4m3.9508788s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner volcano nvidia-device-plugin helm-tiller inspektor-gadget yakd metrics-server default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0612 13:04:51.550005    2768 start.go:245] waiting for cluster config update ...
	I0612 13:04:51.550005    2768 start.go:254] writing updated cluster config ...
	I0612 13:04:51.560945    2768 ssh_runner.go:195] Run: rm -f paused
	I0612 13:04:51.830835    2768 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 13:04:51.835539    2768 out.go:177] * Done! kubectl is now configured to use "addons-605800" cluster and "default" namespace by default
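	The gcp-auth hints printed above refer to a pod label and an addon flag. A minimal sketch of both, assuming the pod name, image, and label value "true" are placeholders (the log confirms only the gcp-auth-skip-secret key and the --refresh flag):

	  # Opt a new pod out of credential mounting; the label must be set at creation time,
	  # since the webhook decides at admission:
	  kubectl --context addons-605800 run my-pod --image=nginx \
	    --labels="gcp-auth-skip-secret=true"

	  # Re-mount credentials into already-running pods by refreshing the addon:
	  out/minikube-windows-amd64.exe -p addons-605800 addons enable gcp-auth --refresh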
	
	
	==> Docker <==
	Jun 12 20:05:26 addons-605800 dockerd[1329]: time="2024-06-12T20:05:26.371591602Z" level=info msg="ignoring event" container=47cec2bcef95cdfe54872d62f4c1dfcde7eecba211418ef7870cb0a7ace2da67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 12 20:05:26 addons-605800 dockerd[1329]: time="2024-06-12T20:05:26.606857573Z" level=info msg="ignoring event" container=e8ca0fe6ecd091797e09c3df070f17edf25b7151a98f9991e9cd4307030b72c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 12 20:05:26 addons-605800 dockerd[1335]: time="2024-06-12T20:05:26.608151272Z" level=info msg="shim disconnected" id=e8ca0fe6ecd091797e09c3df070f17edf25b7151a98f9991e9cd4307030b72c1 namespace=moby
	Jun 12 20:05:26 addons-605800 dockerd[1335]: time="2024-06-12T20:05:26.608930872Z" level=warning msg="cleaning up after shim disconnected" id=e8ca0fe6ecd091797e09c3df070f17edf25b7151a98f9991e9cd4307030b72c1 namespace=moby
	Jun 12 20:05:26 addons-605800 dockerd[1335]: time="2024-06-12T20:05:26.609013972Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 12 20:05:29 addons-605800 dockerd[1335]: time="2024-06-12T20:05:29.164366407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:05:29 addons-605800 dockerd[1335]: time="2024-06-12T20:05:29.164678306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:05:29 addons-605800 dockerd[1335]: time="2024-06-12T20:05:29.164695306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:05:29 addons-605800 dockerd[1335]: time="2024-06-12T20:05:29.165746004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:05:29 addons-605800 cri-dockerd[1235]: time="2024-06-12T20:05:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/33440c6d0eeead8f9530292daf89862c3b977484d3c966928440b7747a4f6303/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 12 20:05:31 addons-605800 cri-dockerd[1235]: time="2024-06-12T20:05:31Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Jun 12 20:05:31 addons-605800 dockerd[1335]: time="2024-06-12T20:05:31.678654297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:05:31 addons-605800 dockerd[1335]: time="2024-06-12T20:05:31.678758097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:05:31 addons-605800 dockerd[1335]: time="2024-06-12T20:05:31.678780597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:05:31 addons-605800 dockerd[1335]: time="2024-06-12T20:05:31.679466995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:05:37 addons-605800 cri-dockerd[1235]: time="2024-06-12T20:05:37Z" level=error msg="error getting RW layer size for container ID '225100dd5286d56bf517555c52e1ff34bff4742eb45928cc262cde7ecf5c7a8b': Error response from daemon: No such container: 225100dd5286d56bf517555c52e1ff34bff4742eb45928cc262cde7ecf5c7a8b"
	Jun 12 20:05:37 addons-605800 cri-dockerd[1235]: time="2024-06-12T20:05:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID '225100dd5286d56bf517555c52e1ff34bff4742eb45928cc262cde7ecf5c7a8b'"
	Jun 12 20:05:37 addons-605800 cri-dockerd[1235]: time="2024-06-12T20:05:37Z" level=error msg="error getting RW layer size for container ID 'c9dc1d2b2898ba8b133286aa2deba508b8ed26e055967aa78e5a7f1dca65fba3': Error response from daemon: No such container: c9dc1d2b2898ba8b133286aa2deba508b8ed26e055967aa78e5a7f1dca65fba3"
	Jun 12 20:05:37 addons-605800 cri-dockerd[1235]: time="2024-06-12T20:05:37Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c9dc1d2b2898ba8b133286aa2deba508b8ed26e055967aa78e5a7f1dca65fba3'"
	Jun 12 20:05:41 addons-605800 cri-dockerd[1235]: time="2024-06-12T20:05:41Z" level=info msg="Pulling image docker.io/nginx:latest: 5529e0792248: Extracting [=============================>                     ]  24.71MB/41.83MB"
	Jun 12 20:05:44 addons-605800 cri-dockerd[1235]: time="2024-06-12T20:05:44Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Downloaded newer image for nginx:latest"
	Jun 12 20:05:45 addons-605800 dockerd[1335]: time="2024-06-12T20:05:45.849922080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:05:45 addons-605800 dockerd[1335]: time="2024-06-12T20:05:45.851054978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:05:45 addons-605800 dockerd[1335]: time="2024-06-12T20:05:45.851085278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:05:45 addons-605800 dockerd[1335]: time="2024-06-12T20:05:45.851511977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	9f7603f33c4f8       nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d                                                                3 seconds ago        Running             task-pv-container                        0                   33440c6d0eeea       task-pv-pod
	5a6f8f116df2a       nginx@sha256:69f8c2c72671490607f52122be2af27d4fc09657ff57e42045801aa93d2090f7                                                                16 seconds ago       Running             nginx                                    0                   1146fe30b28f8       nginx
	fcb89263a36ca       ghcr.io/headlamp-k8s/headlamp@sha256:c48d3702275225be765218b1caffea7fc514ed31bc11533af71ffd1ee6f2fde1                                        28 seconds ago       Running             headlamp                                 0                   b96c50bca8fa8       headlamp-7fc69f7444-7f6zf
	2822940267e01       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 58 seconds ago       Running             gcp-auth                                 0                   126a6fb6c0227       gcp-auth-5db96cd9b4-d25wr
	326d437579d8a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   a359ba78ad0ab       csi-hostpathplugin-9r4zm
	c3f42de3f9845       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   a359ba78ad0ab       csi-hostpathplugin-9r4zm
	c2f032075d9d4       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   9cd308436f35b       ingress-nginx-controller-768f948f8f-tm2g7
	e7228b5117026       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            2 minutes ago        Running             liveness-probe                           0                   a359ba78ad0ab       csi-hostpathplugin-9r4zm
	8ee3fdcbee255       fd19c461b125e                                                                                                                                2 minutes ago        Running             admission                                0                   0b7f056000554       volcano-admission-7b497cf95b-t7tlr
	138d42f18f8c7       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           2 minutes ago        Running             hostpath                                 0                   a359ba78ad0ab       csi-hostpathplugin-9r4zm
	a66d7a0ad35b1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   a359ba78ad0ab       csi-hostpathplugin-9r4zm
	7d5b3276ccbe7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   a359ba78ad0ab       csi-hostpathplugin-9r4zm
	e5077b92fbeff       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   f7bfee440ef6f       csi-hostpath-resizer-0
	a6f8040d862ee       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   81ccacd869b58       csi-hostpath-attacher-0
	b779d624f8c1e       volcanosh/vc-scheduler@sha256:64d6efcf1a48366201aafcaf1bd4cb6d66246ec1c395ddb0deefe11350bcebba                                               2 minutes ago        Running             volcano-scheduler                        0                   a5266dbc33e34       volcano-scheduler-765f888978-wnvbv
	b50d0fce72bbb       volcanosh/vc-controller-manager@sha256:1dd0973f67becc3336f009cce4eac8677d857aaf4ba766cfff371ad34dfc34cf                                      2 minutes ago        Running             volcano-controller                       0                   8a32041790022       volcano-controller-86c5446455-2rpt8
	04488bb0bbd90       684c5ea3b61b2                                                                                                                                2 minutes ago        Exited              patch                                    1                   e18c6a5c4e27e       ingress-nginx-admission-patch-snzrm
	fbadd6953050d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   1e9931885f3d8       ingress-nginx-admission-create-zpfrz
	0f82e6b569f5a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   98149fa682829       snapshot-controller-745499f584-k4k2k
	74cae55076f9c       volcanosh/vc-webhook-manager@sha256:082b6a3b7b8b69d98541a8ea56958ef427fdba54ea555870799f8c9ec2754c1b                                         2 minutes ago        Exited              main                                     0                   aefc1691ec18c       volcano-admission-init-b7g2b
	9cb5e03ccb6ff       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   eb38a67292510       snapshot-controller-745499f584-z6wvw
	15d52d280f7bb       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   546fed308478e       local-path-provisioner-8d985888d-gdmxf
	1ea44e87f6c58       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   3130e693d45f5       metrics-server-c59844bb4-84pkp
	1470100f242dd       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        3 minutes ago        Running             yakd                                     0                   2ad7419a228c2       yakd-dashboard-5ddbf7d777-4spfm
	7c01f3a10c710       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   11e12bc036798       nvidia-device-plugin-daemonset-xqxxl
	ed452a02a8ae8       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               3 minutes ago        Running             cloud-spanner-emulator                   0                   ac3375cf07553       cloud-spanner-emulator-6fcd4f6f98-dml7c
	e8676e5916733       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             4 minutes ago        Running             minikube-ingress-dns                     0                   db2a084e38d0b       kube-ingress-dns-minikube
	2cd2862d48a28       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   ea917432cbb55       storage-provisioner
	84d70d41a0269       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   aad8cfe6d02e2       coredns-7db6d8ff4d-9fb5n
	66b3bcd8cf5bd       747097150317f                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   3ae1fa8124756       kube-proxy-87ftx
	6536ca29f3e62       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   36379a9dec268       etcd-addons-605800
	0d745338ef171       25a1387cdab82                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   463e786a8db5c       kube-controller-manager-addons-605800
	d53d3a5b62c5b       91be940803172                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   1a1f076610538       kube-apiserver-addons-605800
	b9f517a8ee9a7       a52dc94f0a912                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   525ae837a5d42       kube-scheduler-addons-605800
	
	
	==> controller_ingress [c2f032075d9d] <==
	I0612 20:03:55.117302       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0612 20:03:55.140833       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0612 20:03:55.140988       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-tm2g7"
	I0612 20:03:55.160208       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-tm2g7" node="addons-605800"
	I0612 20:03:55.200503       7 controller.go:210] "Backend successfully reloaded"
	I0612 20:03:55.201173       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0612 20:03:55.201773       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-tm2g7", UID:"e88e39d8-ea7a-414b-ad58-03fc311834fb", APIVersion:"v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0612 20:05:24.667508       7 controller.go:1107] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0612 20:05:24.701106       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.034s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:0.035s testedConfigurationSize:18.1kB}
	I0612 20:05:24.701216       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0612 20:05:24.715928       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0612 20:05:24.718029       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"a80773a4-f113-427e-97bb-3d315c16b868", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0612 20:05:27.285296       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	I0612 20:05:27.285870       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0612 20:05:27.395076       7 controller.go:210] "Backend successfully reloaded"
	I0612 20:05:27.396118       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-tm2g7", UID:"e88e39d8-ea7a-414b-ad58-03fc311834fb", APIVersion:"v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0612 20:05:30.618902       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	W0612 20:05:47.776377       7 controller.go:1107] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0612 20:05:47.875415       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.099s renderingIngressLength:2 renderingIngressTime:0.001s admissionTime:0.1s testedConfigurationSize:26.0kB}
	I0612 20:05:47.875768       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0612 20:05:47.893838       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	I0612 20:05:47.894902       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"327caf30-0d6c-41e6-91c8-55d0a571242c", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1781", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0612 20:05:47.899354       7 controller.go:1107] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0612 20:05:47.899921       7 controller.go:190] "Configuration changes detected, backend reload required"
	10.244.0.1 - - [12/Jun/2024:20:05:47 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 80 0.002 [default-nginx-80] [] 10.244.0.30:80 615 0.002 200 b3af4c9979bc8dc24a8a6ddb72b3dae1
	
	
	==> coredns [84d70d41a026] <==
	[INFO] 10.244.0.6:42119 - 10565 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001091397s
	[INFO] 10.244.0.6:49403 - 21243 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001511s
	[INFO] 10.244.0.6:49403 - 2532 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000221499s
	[INFO] 10.244.0.6:55877 - 3155 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000128299s
	[INFO] 10.244.0.6:55877 - 52829 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001249s
	[INFO] 10.244.0.6:55047 - 23208 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001758s
	[INFO] 10.244.0.6:55047 - 8362 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001599s
	[INFO] 10.244.0.6:38816 - 18713 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000184899s
	[INFO] 10.244.0.6:38816 - 8991 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001521s
	[INFO] 10.244.0.6:38113 - 8187 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001255s
	[INFO] 10.244.0.6:38113 - 26105 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0003495s
	[INFO] 10.244.0.6:40871 - 61538 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000913s
	[INFO] 10.244.0.6:40871 - 36452 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000473s
	[INFO] 10.244.0.6:45408 - 31625 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000615s
	[INFO] 10.244.0.6:45408 - 22967 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001773s
	[INFO] 10.244.0.26:52516 - 20145 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0004792s
	[INFO] 10.244.0.26:58763 - 49875 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0002905s
	[INFO] 10.244.0.26:53302 - 2395 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000180299s
	[INFO] 10.244.0.26:59507 - 57599 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0000828s
	[INFO] 10.244.0.26:38832 - 20042 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0002838s
	[INFO] 10.244.0.26:55498 - 26080 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000744s
	[INFO] 10.244.0.26:50591 - 60014 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.008663487s
	[INFO] 10.244.0.26:45196 - 7328 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.010144286s
	[INFO] 10.244.0.28:35597 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000457698s
	[INFO] 10.244.0.28:42964 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000351798s
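	The query pattern above is the cluster DNS search path at work: with resolv.conf set to "search default.svc.cluster.local svc.cluster.local cluster.local" and "options ndots:5" (see the cri-dockerd line in the Docker section), a name with fewer than five dots is tried against each search suffix first (the NXDOMAIN answers) before the fully qualified name resolves (the NOERROR answers). A quick way to reproduce the same pattern from inside the cluster, assuming the busybox image used elsewhere in this run:

	  kubectl --context addons-605800 run --rm -it dns-check --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- \
	    nslookup registry.kube-system.svc.cluster.local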
	
	
	==> describe nodes <==
	Name:               addons-605800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-605800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=addons-605800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T13_00_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-605800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-605800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:00:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-605800
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:05:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:05:40 +0000   Wed, 12 Jun 2024 20:00:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:05:40 +0000   Wed, 12 Jun 2024 20:00:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:05:40 +0000   Wed, 12 Jun 2024 20:00:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:05:40 +0000   Wed, 12 Jun 2024 20:00:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.204.232
	  Hostname:    addons-605800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 bcf7fa20fef74b09bd7ce382085f4112
	  System UUID:                ffa95b0c-c375-4440-86f6-7e779c6f1eba
	  Boot ID:                    3e9b88e1-3ab7-4510-8b13-4ba81fefdd4b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-dml7c      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  default                     hello-world-app-86c47465fc-nccwv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  gcp-auth                    gcp-auth-5db96cd9b4-d25wr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  headlamp                    headlamp-7fc69f7444-7f6zf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-tm2g7    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 coredns-7db6d8ff4d-9fb5n                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 csi-hostpathplugin-9r4zm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-addons-605800                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m15s
	  kube-system                 kube-apiserver-addons-605800                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-addons-605800        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-87ftx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-605800                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 metrics-server-c59844bb4-84pkp               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m28s
	  kube-system                 nvidia-device-plugin-daemonset-xqxxl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 snapshot-controller-745499f584-k4k2k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 snapshot-controller-745499f584-z6wvw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  local-path-storage          local-path-provisioner-8d985888d-gdmxf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  volcano-system              volcano-admission-7b497cf95b-t7tlr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  volcano-system              volcano-controller-86c5446455-2rpt8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  volcano-system              volcano-scheduler-765f888978-wnvbv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-4spfm              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m51s  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m23s  kubelet          Node addons-605800 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m15s  kubelet          Node addons-605800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s  kubelet          Node addons-605800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s  kubelet          Node addons-605800 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m10s  kubelet          Node addons-605800 status is now: NodeReady
	  Normal  RegisteredNode           5m2s   node-controller  Node addons-605800 event: Registered Node addons-605800 in Controller
	
	
	==> dmesg <==
	[Jun12 20:01] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.565165] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.251839] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.003570] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.186374] kauditd_printk_skb: 86 callbacks suppressed
	[ +16.688382] kauditd_printk_skb: 65 callbacks suppressed
	[Jun12 20:02] kauditd_printk_skb: 2 callbacks suppressed
	[ +19.882027] kauditd_printk_skb: 2 callbacks suppressed
	[Jun12 20:03] kauditd_printk_skb: 31 callbacks suppressed
	[ +10.689073] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.992428] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.124049] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.998476] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.007303] kauditd_printk_skb: 25 callbacks suppressed
	[ +14.524059] kauditd_printk_skb: 11 callbacks suppressed
	[Jun12 20:04] kauditd_printk_skb: 38 callbacks suppressed
	[ +31.900736] kauditd_printk_skb: 29 callbacks suppressed
	[ +12.411094] kauditd_printk_skb: 35 callbacks suppressed
	[  +1.639485] hrtimer: interrupt took 2260796 ns
	[  +9.440174] kauditd_printk_skb: 9 callbacks suppressed
	[Jun12 20:05] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.549574] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.007374] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.064938] kauditd_printk_skb: 48 callbacks suppressed
	[  +6.082529] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [6536ca29f3e6] <==
	{"level":"warn","ts":"2024-06-12T20:03:56.309262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.732857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-06-12T20:03:56.31174Z","caller":"traceutil/trace.go:171","msg":"trace[431137325] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1352; }","duration":"183.233951ms","start":"2024-06-12T20:03:56.128496Z","end":"2024-06-12T20:03:56.31173Z","steps":["trace[431137325] 'range keys from in-memory index tree'  (duration: 180.166059ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:03:56.311433Z","caller":"traceutil/trace.go:171","msg":"trace[99172253] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1352; }","duration":"182.666452ms","start":"2024-06-12T20:03:56.128757Z","end":"2024-06-12T20:03:56.311423Z","steps":["trace[99172253] 'range keys from in-memory index tree'  (duration: 179.853759ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:04:02.055523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.701454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-12T20:04:02.055767Z","caller":"traceutil/trace.go:171","msg":"trace[1342372509] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:1373; }","duration":"341.996253ms","start":"2024-06-12T20:04:01.713756Z","end":"2024-06-12T20:04:02.055752Z","steps":["trace[1342372509] 'count revisions from in-memory index tree'  (duration: 341.619454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:04:02.055804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:04:01.713741Z","time spent":"342.050753ms","remote":"127.0.0.1:46268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":5,"response size":29,"request content":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true "}
	{"level":"warn","ts":"2024-06-12T20:04:50.352874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.242249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11509"}
	{"level":"info","ts":"2024-06-12T20:04:50.353028Z","caller":"traceutil/trace.go:171","msg":"trace[1551478180] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1508; }","duration":"315.582749ms","start":"2024-06-12T20:04:50.037427Z","end":"2024-06-12T20:04:50.35301Z","steps":["trace[1551478180] 'range keys from in-memory index tree'  (duration: 314.82835ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:04:50.353101Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T20:04:50.037413Z","time spent":"315.677049ms","remote":"127.0.0.1:45982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11531,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-06-12T20:04:50.354435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.03751ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T20:04:50.35448Z","caller":"traceutil/trace.go:171","msg":"trace[1262696754] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1508; }","duration":"203.10971ms","start":"2024-06-12T20:04:50.151363Z","end":"2024-06-12T20:04:50.354473Z","steps":["trace[1262696754] 'range keys from in-memory index tree'  (duration: 202.90121ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:05:15.357905Z","caller":"traceutil/trace.go:171","msg":"trace[1403666760] transaction","detail":"{read_only:false; response_revision:1630; number_of_response:1; }","duration":"255.235902ms","start":"2024-06-12T20:05:15.102647Z","end":"2024-06-12T20:05:15.357883Z","steps":["trace[1403666760] 'process raft request'  (duration: 255.123602ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:05:15.358583Z","caller":"traceutil/trace.go:171","msg":"trace[2025101863] linearizableReadLoop","detail":"{readStateIndex:1707; appliedIndex:1707; }","duration":"207.14042ms","start":"2024-06-12T20:05:15.151433Z","end":"2024-06-12T20:05:15.358573Z","steps":["trace[2025101863] 'read index received'  (duration: 207.09502ms)","trace[2025101863] 'applied index is now lower than readState.Index'  (duration: 44.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:05:15.358854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.40672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T20:05:15.358883Z","caller":"traceutil/trace.go:171","msg":"trace[2144259326] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1630; }","duration":"207.48242ms","start":"2024-06-12T20:05:15.151392Z","end":"2024-06-12T20:05:15.358875Z","steps":["trace[2144259326] 'agreement among raft nodes before linearized reading'  (duration: 207.40882ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:05:17.551623Z","caller":"traceutil/trace.go:171","msg":"trace[241596436] transaction","detail":"{read_only:false; response_revision:1631; number_of_response:1; }","duration":"182.368927ms","start":"2024-06-12T20:05:17.369234Z","end":"2024-06-12T20:05:17.551603Z","steps":["trace[241596436] 'process raft request'  (duration: 182.196527ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:05:17.861912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.652818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-12T20:05:17.862003Z","caller":"traceutil/trace.go:171","msg":"trace[568538485] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:1631; }","duration":"204.783517ms","start":"2024-06-12T20:05:17.657202Z","end":"2024-06-12T20:05:17.861985Z","steps":["trace[568538485] 'count revisions from in-memory index tree'  (duration: 204.572618ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:05:19.75062Z","caller":"traceutil/trace.go:171","msg":"trace[952742822] transaction","detail":"{read_only:false; response_revision:1656; number_of_response:1; }","duration":"167.64183ms","start":"2024-06-12T20:05:19.582959Z","end":"2024-06-12T20:05:19.750601Z","steps":["trace[952742822] 'process raft request'  (duration: 167.48983ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:05:20.20664Z","caller":"traceutil/trace.go:171","msg":"trace[1500716557] transaction","detail":"{read_only:false; response_revision:1657; number_of_response:1; }","duration":"266.256986ms","start":"2024-06-12T20:05:19.940365Z","end":"2024-06-12T20:05:20.206621Z","steps":["trace[1500716557] 'process raft request'  (duration: 265.793087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:05:28.815086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.26902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-12T20:05:28.815207Z","caller":"traceutil/trace.go:171","msg":"trace[91289776] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1738; }","duration":"136.422719ms","start":"2024-06-12T20:05:28.678768Z","end":"2024-06-12T20:05:28.815191Z","steps":["trace[91289776] 'count revisions from in-memory index tree'  (duration: 136.09502ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:05:28.815255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.347266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:9289"}
	{"level":"info","ts":"2024-06-12T20:05:28.815289Z","caller":"traceutil/trace.go:171","msg":"trace[2023639611] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1738; }","duration":"113.421166ms","start":"2024-06-12T20:05:28.701859Z","end":"2024-06-12T20:05:28.815281Z","steps":["trace[2023639611] 'range keys from in-memory index tree'  (duration: 113.055867ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T20:05:42.658294Z","caller":"traceutil/trace.go:171","msg":"trace[1641318053] transaction","detail":"{read_only:false; response_revision:1767; number_of_response:1; }","duration":"113.408638ms","start":"2024-06-12T20:05:42.544863Z","end":"2024-06-12T20:05:42.658272Z","steps":["trace[1641318053] 'process raft request'  (duration: 54.120475ms)","trace[1641318053] 'compare'  (duration: 58.838164ms)"],"step_count":2}
	
	
	==> gcp-auth [2822940267e0] <==
	2024/06/12 20:04:50 GCP Auth Webhook started!
	2024/06/12 20:04:57 Ready to marshal response ...
	2024/06/12 20:04:57 Ready to write response ...
	2024/06/12 20:05:02 Ready to marshal response ...
	2024/06/12 20:05:02 Ready to write response ...
	2024/06/12 20:05:08 Ready to marshal response ...
	2024/06/12 20:05:08 Ready to write response ...
	2024/06/12 20:05:08 Ready to marshal response ...
	2024/06/12 20:05:08 Ready to write response ...
	2024/06/12 20:05:08 Ready to marshal response ...
	2024/06/12 20:05:08 Ready to write response ...
	2024/06/12 20:05:25 Ready to marshal response ...
	2024/06/12 20:05:25 Ready to write response ...
	2024/06/12 20:05:27 Ready to marshal response ...
	2024/06/12 20:05:27 Ready to write response ...
	2024/06/12 20:05:47 Ready to marshal response ...
	2024/06/12 20:05:47 Ready to write response ...
	
	
	==> kernel <==
	 20:05:48 up 7 min,  0 users,  load average: 2.97, 2.67, 1.36
	Linux addons-605800 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d53d3a5b62c5] <==
	W0612 20:03:35.559219       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:36.595214       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:37.647894       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:38.682375       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:39.718067       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:40.795147       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:41.821419       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:42.846963       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:43.891825       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:44.954757       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:03:46.045689       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.0.177:443: connect: connection refused
	W0612 20:04:14.354818       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.249.110:443: connect: connection refused
	E0612 20:04:14.354875       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.249.110:443: connect: connection refused
	W0612 20:04:33.466504       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.249.110:443: connect: connection refused
	E0612 20:04:33.466614       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.249.110:443: connect: connection refused
	W0612 20:04:33.542656       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.249.110:443: connect: connection refused
	E0612 20:04:33.542941       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.249.110:443: connect: connection refused
	I0612 20:05:08.554212       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.68.102"}
	I0612 20:05:13.195405       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0612 20:05:14.268937       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0612 20:05:21.830508       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	E0612 20:05:21.840244       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	I0612 20:05:24.705038       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0612 20:05:25.122492       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.140.16"}
	I0612 20:05:48.133365       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.62.142"}
	
	
	==> kube-controller-manager [0d745338ef17] <==
	I0612 20:05:08.209921       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0612 20:05:08.781187       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="134.188316ms"
	I0612 20:05:08.849073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="65.665862ms"
	I0612 20:05:08.851418       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="62.8µs"
	E0612 20:05:14.278678       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0612 20:05:15.717048       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:05:15.717189       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0612 20:05:16.988682       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 20:05:16.989361       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 20:05:17.853885       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 20:05:17.854090       1 shared_informer.go:320] Caches are synced for garbage collector
	W0612 20:05:18.613807       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:05:18.613947       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0612 20:05:20.484634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="106.9µs"
	I0612 20:05:22.421056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="30.319276ms"
	I0612 20:05:22.424914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="37.2µs"
	W0612 20:05:23.633555       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:05:23.633610       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0612 20:05:25.821193       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="8.7µs"
	I0612 20:05:28.890145       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0612 20:05:35.429039       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0612 20:05:35.429104       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0612 20:05:47.994080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="223.610777ms"
	I0612 20:05:48.081189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="87.027596ms"
	I0612 20:05:48.081247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="26.5µs"
	
	
	==> kube-proxy [66b3bcd8cf5b] <==
	I0612 20:00:56.025879       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:00:56.125149       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.204.232"]
	I0612 20:00:56.805135       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:00:56.805199       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:00:56.805230       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:00:56.876294       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:00:56.876860       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:00:56.876896       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:00:57.004191       1 config.go:192] "Starting service config controller"
	I0612 20:00:57.004225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:00:57.004328       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:00:57.004337       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:00:57.014768       1 config.go:319] "Starting node config controller"
	I0612 20:00:57.014789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:00:57.302487       1 shared_informer.go:320] Caches are synced for node config
	I0612 20:00:57.305422       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:00:57.305588       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b9f517a8ee9a] <==
	W0612 20:00:31.240344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 20:00:31.240876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 20:00:31.317346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 20:00:31.317656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 20:00:31.573717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:00:31.573973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:00:31.616230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 20:00:31.616341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0612 20:00:31.626909       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 20:00:31.627419       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:00:31.655736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 20:00:31.655781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 20:00:31.673850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 20:00:31.673893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 20:00:31.680818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0612 20:00:31.680860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0612 20:00:31.691761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0612 20:00:31.691802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0612 20:00:31.705842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0612 20:00:31.705924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0612 20:00:31.730770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 20:00:31.731365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 20:00:31.845104       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 20:00:31.845294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 20:00:33.361510       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 20:05:27 addons-605800 kubelet[2127]: I0612 20:05:27.830292    2127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c13dcd-e52f-4d4b-ab41-b525ce55df5f" path="/var/lib/kubelet/pods/e6c13dcd-e52f-4d4b-ab41-b525ce55df5f/volumes"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: I0612 20:05:27.832228    2127 topology_manager.go:215] "Topology Admit Handler" podUID="ad273e07-0e40-4294-a1f3-bb7746050258" podNamespace="default" podName="task-pv-pod"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: E0612 20:05:27.832632    2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8078700f-0fd4-43b4-8a9b-2d1e4f18394f" containerName="gadget"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: E0612 20:05:27.832774    2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e6c13dcd-e52f-4d4b-ab41-b525ce55df5f" containerName="registry"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: E0612 20:05:27.832913    2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04cfc241-3db6-42fc-965e-b4d28c1dd4e7" containerName="registry-proxy"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: I0612 20:05:27.833139    2127 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c13dcd-e52f-4d4b-ab41-b525ce55df5f" containerName="registry"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: I0612 20:05:27.833278    2127 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cfc241-3db6-42fc-965e-b4d28c1dd4e7" containerName="registry-proxy"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: I0612 20:05:27.988005    2127 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpng\" (UniqueName: \"kubernetes.io/projected/ad273e07-0e40-4294-a1f3-bb7746050258-kube-api-access-4gpng\") pod \"task-pv-pod\" (UID: \"ad273e07-0e40-4294-a1f3-bb7746050258\") " pod="default/task-pv-pod"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: I0612 20:05:27.988274    2127 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ad273e07-0e40-4294-a1f3-bb7746050258-gcp-creds\") pod \"task-pv-pod\" (UID: \"ad273e07-0e40-4294-a1f3-bb7746050258\") " pod="default/task-pv-pod"
	Jun 12 20:05:27 addons-605800 kubelet[2127]: I0612 20:05:27.988940    2127 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8b8a0d13-368b-442c-ab32-2ac55f9d8b3b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1c257ada-28f7-11ef-92a9-6eb2a83740b3\") pod \"task-pv-pod\" (UID: \"ad273e07-0e40-4294-a1f3-bb7746050258\") " pod="default/task-pv-pod"
	Jun 12 20:05:28 addons-605800 kubelet[2127]: I0612 20:05:28.133763    2127 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-8b8a0d13-368b-442c-ab32-2ac55f9d8b3b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1c257ada-28f7-11ef-92a9-6eb2a83740b3\") pod \"task-pv-pod\" (UID: \"ad273e07-0e40-4294-a1f3-bb7746050258\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/5858eaa8bf1c24c92bcc5962520ac747c79c03a9a808abd1f74c3722fe5f475d/globalmount\"" pod="default/task-pv-pod"
	Jun 12 20:05:29 addons-605800 kubelet[2127]: I0612 20:05:29.791272    2127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04cfc241-3db6-42fc-965e-b4d28c1dd4e7" path="/var/lib/kubelet/pods/04cfc241-3db6-42fc-965e-b4d28c1dd4e7/volumes"
	Jun 12 20:05:33 addons-605800 kubelet[2127]: E0612 20:05:33.792630    2127 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:05:33 addons-605800 kubelet[2127]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:05:33 addons-605800 kubelet[2127]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:05:33 addons-605800 kubelet[2127]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:05:33 addons-605800 kubelet[2127]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:05:34 addons-605800 kubelet[2127]: I0612 20:05:34.010164    2127 scope.go:117] "RemoveContainer" containerID="c9dc1d2b2898ba8b133286aa2deba508b8ed26e055967aa78e5a7f1dca65fba3"
	Jun 12 20:05:34 addons-605800 kubelet[2127]: I0612 20:05:34.108951    2127 scope.go:117] "RemoveContainer" containerID="225100dd5286d56bf517555c52e1ff34bff4742eb45928cc262cde7ecf5c7a8b"
	Jun 12 20:05:34 addons-605800 kubelet[2127]: I0612 20:05:34.171401    2127 scope.go:117] "RemoveContainer" containerID="f41205ba49bdcf9fe3b40d1a122fe3c48e9f9c608c15e79f66fff35067959fa8"
	Jun 12 20:05:46 addons-605800 kubelet[2127]: I0612 20:05:46.611244    2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=16.561781807 podStartE2EDuration="21.611190505s" podCreationTimestamp="2024-06-12 20:05:25 +0000 UTC" firstStartedPulling="2024-06-12 20:05:26.124924138 +0000 UTC m=+292.642115955" lastFinishedPulling="2024-06-12 20:05:31.174332736 +0000 UTC m=+297.691524653" observedRunningTime="2024-06-12 20:05:31.96507405 +0000 UTC m=+298.482265967" watchObservedRunningTime="2024-06-12 20:05:46.611190505 +0000 UTC m=+313.128382422"
	Jun 12 20:05:48 addons-605800 kubelet[2127]: I0612 20:05:48.000194    2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=5.594836409 podStartE2EDuration="21.000171858s" podCreationTimestamp="2024-06-12 20:05:27 +0000 UTC" firstStartedPulling="2024-06-12 20:05:29.563651082 +0000 UTC m=+296.080842999" lastFinishedPulling="2024-06-12 20:05:44.968986631 +0000 UTC m=+311.486178448" observedRunningTime="2024-06-12 20:05:46.614905096 +0000 UTC m=+313.132096913" watchObservedRunningTime="2024-06-12 20:05:48.000171858 +0000 UTC m=+314.517363675"
	Jun 12 20:05:48 addons-605800 kubelet[2127]: I0612 20:05:48.000950    2127 topology_manager.go:215] "Topology Admit Handler" podUID="8072f14e-1f42-45e0-8792-4246eb74bdc8" podNamespace="default" podName="hello-world-app-86c47465fc-nccwv"
	Jun 12 20:05:48 addons-605800 kubelet[2127]: I0612 20:05:48.141820    2127 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8072f14e-1f42-45e0-8792-4246eb74bdc8-gcp-creds\") pod \"hello-world-app-86c47465fc-nccwv\" (UID: \"8072f14e-1f42-45e0-8792-4246eb74bdc8\") " pod="default/hello-world-app-86c47465fc-nccwv"
	Jun 12 20:05:48 addons-605800 kubelet[2127]: I0612 20:05:48.141886    2127 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67qf\" (UniqueName: \"kubernetes.io/projected/8072f14e-1f42-45e0-8792-4246eb74bdc8-kube-api-access-t67qf\") pod \"hello-world-app-86c47465fc-nccwv\" (UID: \"8072f14e-1f42-45e0-8792-4246eb74bdc8\") " pod="default/hello-world-app-86c47465fc-nccwv"
	
	
	==> storage-provisioner [2cd2862d48a2] <==
	I0612 20:01:17.274677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 20:01:17.322842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 20:01:17.325354       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 20:01:17.348431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 20:01:17.356057       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-605800_1b06ff34-769a-4569-a5d4-32b6d0e29125!
	I0612 20:01:17.357196       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"370c9373-ebf5-4a26-bfeb-40d33d90fb13", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-605800_1b06ff34-769a-4569-a5d4-32b6d0e29125 became leader
	I0612 20:01:17.458642       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-605800_1b06ff34-769a-4569-a5d4-32b6d0e29125!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0612 13:05:39.376301   11924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-605800 -n addons-605800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-605800 -n addons-605800: (13.2299465s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-605800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zpfrz ingress-nginx-admission-patch-snzrm volcano-admission-init-b7g2b
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-605800 describe pod ingress-nginx-admission-create-zpfrz ingress-nginx-admission-patch-snzrm volcano-admission-init-b7g2b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-605800 describe pod ingress-nginx-admission-create-zpfrz ingress-nginx-admission-patch-snzrm volcano-admission-init-b7g2b: exit status 1 (169.3949ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zpfrz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-snzrm" not found
	Error from server (NotFound): pods "volcano-admission-init-b7g2b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-605800 describe pod ingress-nginx-admission-create-zpfrz ingress-nginx-admission-patch-snzrm volcano-admission-init-b7g2b: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.15s)

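Note: every functional assertion in the Registry transcript passed (both registry pods went healthy and the in-cluster wget succeeded); the test fell over only on the expectation that stderr be empty, and the stderr content is the host-side Docker CLI warning ("Unable to resolve the current Docker CLI context \"default\"") captured in the stderr block above. The identical warning also sinks TestErrorSpam/setup below (error_spam_test.go:96). As a minimal sketch, and not minikube code, the helper below (filterKnownWarnings and knownNoise are hypothetical names) shows one way a stderr-emptiness check could tolerate this kind of host-environment noise:

// NOTE: a minimal sketch, not part of the minikube test suite.
// filterKnownWarnings and knownNoise are hypothetical names illustrating
// how a stderr-emptiness assertion could tolerate host noise such as the
// Docker CLI context warning seen in the stderr blocks of this report.
package main

import (
	"fmt"
	"strings"
)

// knownNoise holds stderr fragments produced by the host environment rather
// than by the command under test; only the first entry is attested above.
var knownNoise = []string{
	"Unable to resolve the current Docker CLI context",
}

// filterKnownWarnings drops whole lines containing a knownNoise fragment
// and returns whatever stderr remains.
func filterKnownWarnings(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		noisy := false
		for _, frag := range knownNoise {
			if strings.Contains(line, frag) {
				noisy = true
				break
			}
		}
		if !noisy && strings.TrimSpace(line) != "" {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n")
}

func main() {
	stderr := `W0612 13:05:07.745709    3540 main.go:291] Unable to resolve the current Docker CLI context "default"`
	if rest := filterKnownWarnings(stderr); rest != "" {
		fmt.Printf("unexpected stderr: %q\n", rest)
	} else {
		fmt.Println("stderr is empty once known warnings are filtered")
	}
}

Whether to whitelist such warnings is a policy question for the suite; the filter only separates environment noise from output the command under test actually produced.
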
TestErrorSpam/setup (189.33s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-736400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 --driver=hyperv
E0612 13:09:51.894176    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:51.921377    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:51.949221    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:51.977644    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:52.020011    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:52.112664    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:52.281063    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:52.612159    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:53.265272    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:54.548419    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:09:57.116419    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:10:02.244540    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:10:12.501494    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:10:32.986517    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:11:13.954555    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-736400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 --driver=hyperv: (3m9.3313735s)
error_spam_test.go:96: unexpected stderr: "W0612 13:09:14.462741    6912 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-736400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=19044
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-736400" primary control-plane node in "nospam-736400" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-736400" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0612 13:09:14.462741    6912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (189.33s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (32.78s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
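Note: the Windows error above ("Cannot create a file when that file already exists") is the os.Link failure format, meaning out\kubectl.exe was left behind by an earlier run and a hard link cannot overwrite an existing file. A minimal sketch, assuming a hypothetical helper name (linkFresh) that is not minikube code, of guarding the link by removing a stale destination first:

// NOTE: a minimal sketch, not minikube code. linkFresh is a hypothetical
// helper showing the guard the failure above calls for: os.Link cannot
// overwrite an existing file on Windows, so a stale out\kubectl.exe from a
// previous run must be removed before re-linking.
package main

import (
	"fmt"
	"os"
)

func linkFresh(src, dst string) error {
	// Clear any leftover destination; tolerate "not found" so a clean
	// first run still succeeds.
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("removing stale %s: %w", dst, err)
	}
	return os.Link(src, dst)
}

func main() {
	if err := linkFresh(`out/minikube-windows-amd64.exe`, `out\kubectl.exe`); err != nil {
		fmt.Println("link failed:", err)
	}
}

Removing first is racy if two jobs share a checkout, but for a single-worker CI run it avoids exactly this leftover-artifact failure.
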
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-269100 -n functional-269100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-269100 -n functional-269100: (11.7761135s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 logs -n 25: (8.2582947s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-736400 --log_dir                                     | nospam-736400     | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:13 PDT | 12 Jun 24 13:13 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-736400 --log_dir                                     | nospam-736400     | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:13 PDT | 12 Jun 24 13:13 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-736400 --log_dir                                     | nospam-736400     | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:13 PDT | 12 Jun 24 13:13 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-736400 --log_dir                                     | nospam-736400     | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:13 PDT | 12 Jun 24 13:13 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-736400 --log_dir                                     | nospam-736400     | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:13 PDT | 12 Jun 24 13:14 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-736400 --log_dir                                     | nospam-736400     | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:14 PDT | 12 Jun 24 13:14 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-736400 --log_dir                                     | nospam-736400     | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:14 PDT | 12 Jun 24 13:14 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-736400                                            | nospam-736400     | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:14 PDT | 12 Jun 24 13:15 PDT |
	| start   | -p functional-269100                                        | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:15 PDT | 12 Jun 24 13:19 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-269100                                        | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:19 PDT | 12 Jun 24 13:21 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-269100 cache add                                 | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:21 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-269100 cache add                                 | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:21 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-269100 cache add                                 | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:21 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-269100 cache add                                 | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:21 PDT |
	|         | minikube-local-cache-test:functional-269100                 |                   |                   |         |                     |                     |
	| cache   | functional-269100 cache delete                              | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:21 PDT |
	|         | minikube-local-cache-test:functional-269100                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:21 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:21 PDT |
	| ssh     | functional-269100 ssh sudo                                  | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:21 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-269100                                           | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:21 PDT | 12 Jun 24 13:22 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-269100 ssh                                       | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:22 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-269100 cache reload                              | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:22 PDT | 12 Jun 24 13:22 PDT |
	| ssh     | functional-269100 ssh                                       | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:22 PDT | 12 Jun 24 13:22 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:22 PDT | 12 Jun 24 13:22 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:22 PDT | 12 Jun 24 13:22 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-269100 kubectl --                                | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:22 PDT | 12 Jun 24 13:22 PDT |
	|         | --context functional-269100                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 13:19:06
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 13:19:06.987645   10368 out.go:291] Setting OutFile to fd 616 ...
	I0612 13:19:06.987645   10368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:19:06.987645   10368 out.go:304] Setting ErrFile to fd 664...
	I0612 13:19:06.987645   10368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:19:07.009692   10368 out.go:298] Setting JSON to false
	I0612 13:19:07.013264   10368 start.go:129] hostinfo: {"hostname":"minikube1","uptime":21899,"bootTime":1718201647,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 13:19:07.013264   10368 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 13:19:07.018645   10368 out.go:177] * [functional-269100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 13:19:07.023174   10368 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:19:07.022373   10368 notify.go:220] Checking for updates...
	I0612 13:19:07.026163   10368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 13:19:07.029813   10368 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 13:19:07.033322   10368 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 13:19:07.035055   10368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 13:19:07.038355   10368 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:19:07.039465   10368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 13:19:12.222719   10368 out.go:177] * Using the hyperv driver based on existing profile
	I0612 13:19:12.226423   10368 start.go:297] selected driver: hyperv
	I0612 13:19:12.226423   10368 start.go:901] validating driver "hyperv" against &{Name:functional-269100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-269100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.195.181 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:19:12.226741   10368 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 13:19:12.273778   10368 cni.go:84] Creating CNI manager for ""
	I0612 13:19:12.273778   10368 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0612 13:19:12.274541   10368 start.go:340] cluster config:
	{Name:functional-269100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-269100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.195.181 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:19:12.274541   10368 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 13:19:12.277863   10368 out.go:177] * Starting "functional-269100" primary control-plane node in "functional-269100" cluster
	I0612 13:19:12.281841   10368 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:19:12.281841   10368 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0612 13:19:12.281841   10368 cache.go:56] Caching tarball of preloaded images
	I0612 13:19:12.281841   10368 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 13:19:12.281841   10368 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 13:19:12.281841   10368 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\config.json ...
	I0612 13:19:12.287049   10368 start.go:360] acquireMachinesLock for functional-269100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 13:19:12.287268   10368 start.go:364] duration metric: took 125.3µs to acquireMachinesLock for "functional-269100"
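
The two "acquiring lock" entries above (iso.go:125 and start.go:360) reflect the delay/timeout pattern minikube uses to serialize machine operations: retry every Delay until Timeout expires. A minimal standard-library sketch of that pattern follows; it is illustrative only, not minikube's actual implementation, and the lock-file path and function names are invented for the example.

    // lockdemo sketches a poll-with-delay, give-up-at-timeout lock,
    // the shape of the {Delay:500ms Timeout:13m0s} entries in the log.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire retries an exclusive lock-file creation every delay until timeout.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out waiting for " + path)
    		}
    		time.Sleep(delay) // matches the Delay:500ms seen in the log
    	}
    }

    func main() {
    	release, err := acquire(os.TempDir()+"/mk-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held; safe to mutate machine state")
    }
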
	I0612 13:19:12.287268   10368 start.go:96] Skipping create...Using existing machine configuration
	I0612 13:19:12.287268   10368 fix.go:54] fixHost starting: 
	I0612 13:19:12.288148   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:14.986256   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:14.986256   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:14.986325   10368 fix.go:112] recreateIfNeeded on functional-269100: state=Running err=<nil>
	W0612 13:19:14.986325   10368 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 13:19:14.989698   10368 out.go:177] * Updating the running hyperv "functional-269100" VM ...
	I0612 13:19:14.993084   10368 machine.go:94] provisionDockerMachine start ...
	I0612 13:19:14.993084   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:17.114664   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:17.114664   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:17.114664   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:19.641102   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:19.641102   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:19.658225   10368 main.go:141] libmachine: Using SSH client type: native
	I0612 13:19:19.658954   10368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.195.181 22 <nil> <nil>}
	I0612 13:19:19.658954   10368 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 13:19:19.786846   10368 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-269100
	
	I0612 13:19:19.786846   10368 buildroot.go:166] provisioning hostname "functional-269100"
	I0612 13:19:19.786846   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:21.889179   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:21.889179   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:21.889294   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:24.398644   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:24.403197   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:24.409253   10368 main.go:141] libmachine: Using SSH client type: native
	I0612 13:19:24.409905   10368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.195.181 22 <nil> <nil>}
	I0612 13:19:24.409987   10368 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-269100 && echo "functional-269100" | sudo tee /etc/hostname
	I0612 13:19:24.568048   10368 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-269100
	
	I0612 13:19:24.568178   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:26.644331   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:26.644331   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:26.644331   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:29.150365   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:29.162129   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:29.168072   10368 main.go:141] libmachine: Using SSH client type: native
	I0612 13:19:29.168244   10368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.195.181 22 <nil> <nil>}
	I0612 13:19:29.168244   10368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-269100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-269100/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-269100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 13:19:29.314642   10368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
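
The shell just executed makes the hostname mapping idempotent: if no /etc/hosts line already ends in functional-269100, it rewrites an existing 127.0.1.1 entry, otherwise appends one. A minimal Go sketch of the same check-then-rewrite-or-append logic, operating on a hosts-file string purely for illustration (the function name is hypothetical):

    // ensureHostname mirrors the grep/sed/tee logic in the SSH command above.
    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func ensureHostname(hosts, name string) string {
    	// Already mapped? (any line ending in whitespace + name)
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts
    	}
    	// Rewrite an existing 127.0.1.1 entry if present...
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(hosts) {
    		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	// ...otherwise append a new one.
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "functional-269100"))
    }
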
	I0612 13:19:29.314642   10368 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 13:19:29.314642   10368 buildroot.go:174] setting up certificates
	I0612 13:19:29.315180   10368 provision.go:84] configureAuth start
	I0612 13:19:29.315305   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:31.351266   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:31.351266   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:31.362995   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:33.842456   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:33.854319   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:33.854319   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:35.937879   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:35.939098   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:35.939165   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:38.397740   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:38.397740   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:38.401485   10368 provision.go:143] copyHostCerts
	I0612 13:19:38.401485   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 13:19:38.401485   10368 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 13:19:38.401485   10368 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 13:19:38.402412   10368 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 13:19:38.403716   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 13:19:38.403716   10368 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 13:19:38.403716   10368 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 13:19:38.404509   10368 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 13:19:38.405795   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 13:19:38.406024   10368 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 13:19:38.406024   10368 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 13:19:38.407159   10368 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 13:19:38.408819   10368 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-269100 san=[127.0.0.1 172.23.195.181 functional-269100 localhost minikube]
	I0612 13:19:38.726751   10368 provision.go:177] copyRemoteCerts
	I0612 13:19:38.737399   10368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 13:19:38.737399   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:40.809645   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:40.809645   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:40.809728   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:43.258664   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:43.258664   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:43.270306   10368 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
	I0612 13:19:43.374157   10368 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6367436s)
	I0612 13:19:43.374254   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 13:19:43.374356   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 13:19:43.416694   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 13:19:43.425229   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0612 13:19:43.476080   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 13:19:43.476650   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 13:19:43.526352   10368 provision.go:87] duration metric: took 14.2109613s to configureAuth
	I0612 13:19:43.526352   10368 buildroot.go:189] setting minikube options for container-runtime
	I0612 13:19:43.526976   10368 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:19:43.527060   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:45.626265   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:45.626265   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:45.637574   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:48.130709   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:48.130709   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:48.144941   10368 main.go:141] libmachine: Using SSH client type: native
	I0612 13:19:48.146063   10368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.195.181 22 <nil> <nil>}
	I0612 13:19:48.146063   10368 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 13:19:48.277266   10368 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 13:19:48.277424   10368 buildroot.go:70] root file system type: tmpfs
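
The root-filesystem probe above is a single df query whose last output line is the fstype (tmpfs here, consistent with the live buildroot image). A local sketch of the same probe, assuming a Unix host with GNU df on PATH; in the log it runs over SSH:

    // Probe the root filesystem type the way the SSH command above does.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println("root fstype:", strings.TrimSpace(string(out))) // "tmpfs" in the log
    }
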
	I0612 13:19:48.277424   10368 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 13:19:48.277424   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:50.327618   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:50.327618   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:50.327618   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:52.761376   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:52.761376   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:52.778216   10368 main.go:141] libmachine: Using SSH client type: native
	I0612 13:19:52.778746   10368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.195.181 22 <nil> <nil>}
	I0612 13:19:52.778952   10368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 13:19:52.935028   10368 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
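The %!s(MISSING) token in the command above (and in the later date +%!s(MISSING).%!N(MISSING) and printf lines) is not part of what ran on the guest: it is Go's fmt notation for a format verb with no matching argument, produced when the logger formats a command string that itself contains literal %s or %N. The echoed unit file above confirms the write went through intact. A two-line demonstration of the artifact:

    package main

    import "fmt"

    func main() {
    	// A verb with no matching argument is reported, not dropped:
    	fmt.Printf("date +%s.%N\n") // prints: date +%!s(MISSING).%!N(MISSING)
    }
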
	I0612 13:19:52.935028   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:54.959777   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:54.970717   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:54.971013   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:19:57.408481   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:19:57.408481   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:57.432664   10368 main.go:141] libmachine: Using SSH client type: native
	I0612 13:19:57.433195   10368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.195.181 22 <nil> <nil>}
	I0612 13:19:57.433195   10368 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 13:19:57.582397   10368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 13:19:57.582397   10368 machine.go:97] duration metric: took 42.5891803s to provisionDockerMachine
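
The final provisioning step above only swaps in the new unit and restarts docker when diff reports a change, so re-runs against an unchanged unit leave the daemon alone. A sketch of that guard in Go, with local file paths and a hypothetical helper standing in for the SSH session used in the log:

    // applyIfChanged installs a staged unit file and restarts docker
    // only when its contents differ from the installed copy.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyIfChanged(current, next string) error {
    	a, _ := os.ReadFile(current) // missing file => treat as changed
    	b, err := os.ReadFile(next)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(a, b) {
    		return os.Remove(next) // no change; discard the staged copy
    	}
    	if err := os.Rename(next, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %s", err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(applyIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"))
    }
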
	I0612 13:19:57.582397   10368 start.go:293] postStartSetup for "functional-269100" (driver="hyperv")
	I0612 13:19:57.582397   10368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 13:19:57.594258   10368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 13:19:57.594258   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:19:59.620032   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:19:59.620032   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:19:59.630868   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:20:02.158364   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:20:02.158364   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:02.169237   10368 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
	I0612 13:20:02.274753   10368 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6804798s)
	I0612 13:20:02.285693   10368 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 13:20:02.293541   10368 command_runner.go:130] > NAME=Buildroot
	I0612 13:20:02.293714   10368 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 13:20:02.293714   10368 command_runner.go:130] > ID=buildroot
	I0612 13:20:02.293714   10368 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 13:20:02.293714   10368 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 13:20:02.293714   10368 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 13:20:02.293826   10368 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 13:20:02.294242   10368 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 13:20:02.295118   10368 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 13:20:02.295335   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 13:20:02.296589   10368 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\1280\hosts -> hosts in /etc/test/nested/copy/1280
	I0612 13:20:02.296589   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\1280\hosts -> /etc/test/nested/copy/1280/hosts
	I0612 13:20:02.307138   10368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1280
	I0612 13:20:02.325866   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 13:20:02.375790   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\1280\hosts --> /etc/test/nested/copy/1280/hosts (40 bytes)
	I0612 13:20:02.426883   10368 start.go:296] duration metric: took 4.844471s for postStartSetup
	I0612 13:20:02.426883   10368 fix.go:56] duration metric: took 50.1394594s for fixHost
	I0612 13:20:02.426883   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:20:04.475306   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:20:04.475306   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:04.486228   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:20:06.894311   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:20:06.894311   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:06.911302   10368 main.go:141] libmachine: Using SSH client type: native
	I0612 13:20:06.911899   10368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.195.181 22 <nil> <nil>}
	I0612 13:20:06.912150   10368 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 13:20:07.046242   10368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718223607.049162773
	
	I0612 13:20:07.046349   10368 fix.go:216] guest clock: 1718223607.049162773
	I0612 13:20:07.046349   10368 fix.go:229] Guest: 2024-06-12 13:20:07.049162773 -0700 PDT Remote: 2024-06-12 13:20:02.4268832 -0700 PDT m=+55.518316401 (delta=4.622279573s)
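
The delta reported by fix.go is plain subtraction: guest clock 13:20:07.049162773 minus the host-recorded remote time 13:20:02.4268832 is 4.622279573s, exactly as logged, after which the guest clock is reset with sudo date -s below. A minimal check in Go:

    // Recompute the guest-vs-remote clock delta shown in the log.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	guest, _ := time.Parse(layout, "2024-06-12 13:20:07.049162773 -0700 PDT")
    	remote, _ := time.Parse(layout, "2024-06-12 13:20:02.4268832 -0700 PDT")
    	fmt.Println(guest.Sub(remote)) // 4.622279573s, matching fix.go:229
    }
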
	I0612 13:20:07.046458   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:20:09.086633   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:20:09.098724   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:09.098887   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:20:11.557631   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:20:11.557631   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:11.573802   10368 main.go:141] libmachine: Using SSH client type: native
	I0612 13:20:11.574534   10368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.195.181 22 <nil> <nil>}
	I0612 13:20:11.574534   10368 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718223607
	I0612 13:20:11.717414   10368 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 20:20:07 UTC 2024
	
	I0612 13:20:11.717506   10368 fix.go:236] clock set: Wed Jun 12 20:20:07 UTC 2024
	 (err=<nil>)
	I0612 13:20:11.717506   10368 start.go:83] releasing machines lock for "functional-269100", held for 59.4300542s
	I0612 13:20:11.717609   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:20:13.771614   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:20:13.782798   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:13.782933   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:20:16.264223   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:20:16.264223   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:16.280241   10368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 13:20:16.280241   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:20:16.288902   10368 ssh_runner.go:195] Run: cat /version.json
	I0612 13:20:16.288902   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:20:18.448449   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:20:18.448449   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:18.448449   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:20:18.448449   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:20:18.448449   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:18.448976   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:20:21.146241   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:20:21.157872   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:21.158209   10368 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
	I0612 13:20:21.189731   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:20:21.189731   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:20:21.190089   10368 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
	I0612 13:20:21.252274   10368 command_runner.go:130] > {"iso_version": "v1.33.1-1718047936-19044", "kicbase_version": "v0.0.44-1718016726-19044", "minikube_version": "v1.33.1", "commit": "8a07c05cb41cba41fd6bf6981cdae9c899c82330"}
	I0612 13:20:21.252626   10368 ssh_runner.go:235] Completed: cat /version.json: (4.963581s)
	I0612 13:20:21.270060   10368 ssh_runner.go:195] Run: systemctl --version
	I0612 13:20:21.331709   10368 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 13:20:21.331709   10368 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0514525s)
	I0612 13:20:21.332001   10368 command_runner.go:130] > systemd 252 (252)
	I0612 13:20:21.332001   10368 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0612 13:20:21.343775   10368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 13:20:21.352113   10368 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0612 13:20:21.354845   10368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 13:20:21.367690   10368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 13:20:21.387553   10368 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0612 13:20:21.387553   10368 start.go:494] detecting cgroup driver to use...
	I0612 13:20:21.387880   10368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:20:21.419322   10368 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0612 13:20:21.442588   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 13:20:21.482566   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 13:20:21.504246   10368 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 13:20:21.517865   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 13:20:21.551906   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:20:21.592591   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 13:20:21.627945   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:20:21.669852   10368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 13:20:21.710715   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 13:20:21.743652   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 13:20:21.781154   10368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 13:20:21.823500   10368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 13:20:21.846198   10368 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 13:20:21.860273   10368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 13:20:21.893751   10368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:20:22.189564   10368 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0612 13:20:22.225177   10368 start.go:494] detecting cgroup driver to use...
	I0612 13:20:22.239173   10368 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 13:20:22.263560   10368 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0612 13:20:22.263560   10368 command_runner.go:130] > [Unit]
	I0612 13:20:22.263942   10368 command_runner.go:130] > Description=Docker Application Container Engine
	I0612 13:20:22.263942   10368 command_runner.go:130] > Documentation=https://docs.docker.com
	I0612 13:20:22.263942   10368 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0612 13:20:22.263942   10368 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0612 13:20:22.263942   10368 command_runner.go:130] > StartLimitBurst=3
	I0612 13:20:22.263942   10368 command_runner.go:130] > StartLimitIntervalSec=60
	I0612 13:20:22.264065   10368 command_runner.go:130] > [Service]
	I0612 13:20:22.264065   10368 command_runner.go:130] > Type=notify
	I0612 13:20:22.264065   10368 command_runner.go:130] > Restart=on-failure
	I0612 13:20:22.264065   10368 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0612 13:20:22.264232   10368 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0612 13:20:22.264232   10368 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0612 13:20:22.264232   10368 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0612 13:20:22.264232   10368 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0612 13:20:22.264232   10368 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0612 13:20:22.264379   10368 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0612 13:20:22.264379   10368 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0612 13:20:22.264501   10368 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0612 13:20:22.264501   10368 command_runner.go:130] > ExecStart=
	I0612 13:20:22.264565   10368 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0612 13:20:22.264619   10368 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0612 13:20:22.264619   10368 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0612 13:20:22.264697   10368 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0612 13:20:22.264697   10368 command_runner.go:130] > LimitNOFILE=infinity
	I0612 13:20:22.264697   10368 command_runner.go:130] > LimitNPROC=infinity
	I0612 13:20:22.264769   10368 command_runner.go:130] > LimitCORE=infinity
	I0612 13:20:22.264829   10368 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0612 13:20:22.264829   10368 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0612 13:20:22.264829   10368 command_runner.go:130] > TasksMax=infinity
	I0612 13:20:22.264904   10368 command_runner.go:130] > TimeoutStartSec=0
	I0612 13:20:22.264904   10368 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0612 13:20:22.264904   10368 command_runner.go:130] > Delegate=yes
	I0612 13:20:22.264978   10368 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0612 13:20:22.265043   10368 command_runner.go:130] > KillMode=process
	I0612 13:20:22.265043   10368 command_runner.go:130] > [Install]
	I0612 13:20:22.265119   10368 command_runner.go:130] > WantedBy=multi-user.target
	I0612 13:20:22.279223   10368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:20:22.332128   10368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 13:20:22.377501   10368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:20:22.421672   10368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:20:22.446085   10368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:20:22.488000   10368 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0612 13:20:22.503885   10368 ssh_runner.go:195] Run: which cri-dockerd
	I0612 13:20:22.506663   10368 command_runner.go:130] > /usr/bin/cri-dockerd
	I0612 13:20:22.512429   10368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 13:20:22.540908   10368 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 13:20:22.593268   10368 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 13:20:22.888920   10368 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 13:20:23.160858   10368 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 13:20:23.161103   10368 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 13:20:23.218679   10368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:20:23.492563   10368 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:20:36.301102   10368 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.8085001s)
	I0612 13:20:36.315897   10368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 13:20:36.364806   10368 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0612 13:20:36.416169   10368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:20:36.459502   10368 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 13:20:36.681262   10368 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 13:20:36.875603   10368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:20:37.096136   10368 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 13:20:37.141130   10368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:20:37.179278   10368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:20:37.365092   10368 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 13:20:37.511394   10368 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 13:20:37.524207   10368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 13:20:37.533010   10368 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0612 13:20:37.533069   10368 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 13:20:37.533069   10368 command_runner.go:130] > Device: 0,22	Inode: 1503        Links: 1
	I0612 13:20:37.533160   10368 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0612 13:20:37.533160   10368 command_runner.go:130] > Access: 2024-06-12 20:20:37.394038443 +0000
	I0612 13:20:37.533160   10368 command_runner.go:130] > Modify: 2024-06-12 20:20:37.394038443 +0000
	I0612 13:20:37.533160   10368 command_runner.go:130] > Change: 2024-06-12 20:20:37.398038649 +0000
	I0612 13:20:37.533207   10368 command_runner.go:130] >  Birth: -
	I0612 13:20:37.533247   10368 start.go:562] Will wait 60s for crictl version
	I0612 13:20:37.546790   10368 ssh_runner.go:195] Run: which crictl
	I0612 13:20:37.548568   10368 command_runner.go:130] > /usr/bin/crictl
	I0612 13:20:37.565788   10368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 13:20:37.614889   10368 command_runner.go:130] > Version:  0.1.0
	I0612 13:20:37.614951   10368 command_runner.go:130] > RuntimeName:  docker
	I0612 13:20:37.614951   10368 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0612 13:20:37.614951   10368 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 13:20:37.615017   10368 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 13:20:37.624006   10368 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:20:37.654401   10368 command_runner.go:130] > 26.1.4
	I0612 13:20:37.665255   10368 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:20:37.695292   10368 command_runner.go:130] > 26.1.4
	I0612 13:20:37.699153   10368 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 13:20:37.699349   10368 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 13:20:37.703706   10368 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 13:20:37.703706   10368 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 13:20:37.703706   10368 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 13:20:37.703706   10368 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 13:20:37.707109   10368 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 13:20:37.707109   10368 ip.go:210] interface addr: 172.23.192.1/20
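
getIPForInterface above scans the host's NICs, skips any whose name does not start with "vEthernet (Default Switch)", and takes the first IPv4 address of the match (172.23.192.1), which then becomes host.minikube.internal inside the guest. A minimal sketch of that prefix search using net.Interfaces; the function name is illustrative:

    // ipForInterfacePrefix returns the first IPv4 address of the first
    // interface whose name starts with prefix, as in the ip.go log lines.
    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func ipForInterfacePrefix(prefix string) (net.IP, error) {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		return nil, err
    	}
    	for _, ifc := range ifaces {
    		if !strings.HasPrefix(ifc.Name, prefix) {
    			continue
    		}
    		addrs, err := ifc.Addrs()
    		if err != nil {
    			return nil, err
    		}
    		for _, a := range addrs {
    			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
    				return ipn.IP, nil // e.g. 172.23.192.1 in the log
    			}
    		}
    	}
    	return nil, fmt.Errorf("no interface matching %q", prefix)
    }

    func main() {
    	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
    	fmt.Println(ip, err)
    }
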
	I0612 13:20:37.718204   10368 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 13:20:37.724106   10368 command_runner.go:130] > 172.23.192.1	host.minikube.internal
	I0612 13:20:37.724253   10368 kubeadm.go:877] updating cluster {Name:functional-269100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-269100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.195.181 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 13:20:37.724253   10368 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:20:37.734098   10368 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 13:20:37.759818   10368 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0612 13:20:37.759902   10368 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0612 13:20:37.759902   10368 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 13:20:37.759902   10368 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0612 13:20:37.759902   10368 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0612 13:20:37.759902   10368 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0612 13:20:37.759972   10368 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0612 13:20:37.759972   10368 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 13:20:37.760063   10368 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0612 13:20:37.760063   10368 docker.go:615] Images already preloaded, skipping extraction
	I0612 13:20:37.769024   10368 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 13:20:37.791337   10368 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0612 13:20:37.791421   10368 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0612 13:20:37.791455   10368 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 13:20:37.791492   10368 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0612 13:20:37.791525   10368 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0612 13:20:37.791525   10368 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0612 13:20:37.791525   10368 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0612 13:20:37.791525   10368 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 13:20:37.791712   10368 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0612 13:20:37.791809   10368 cache_images.go:84] Images are preloaded, skipping loading
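The two identical "docker images" listings above are the preload check: minikube compares what the daemon already holds against the image set required for v1.30.1 and only extracts the preload tarball when something is missing. A minimal sketch of that comparison; the helper and the abbreviated image list below are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// requiredImages is an abbreviated, illustrative subset of the list in the log.
var requiredImages = []string{
	"registry.k8s.io/kube-apiserver:v1.30.1",
	"registry.k8s.io/etcd:3.5.12-0",
	"gcr.io/k8s-minikube/storage-provisioner:v5",
}

func main() {
	// Ask the daemon for every repo:tag it already has, as in the log.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// Preload extraction is only needed if something is missing.
	for _, img := range requiredImages {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}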
	I0612 13:20:37.791848   10368 kubeadm.go:928] updating node { 172.23.195.181 8441 v1.30.1 docker true true} ...
	I0612 13:20:37.791890   10368 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-269100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.195.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-269100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
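The kubelet unit logged at kubeadm.go:940 is rendered from the node's settings (binary path, hostname, node IP) and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 318-byte scp). A sketch of that render step using Go's text/template; the options struct and template text here are illustrative, not minikube's real source:

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the values substituted into the unit.
type kubeletOpts struct {
	BinDir, Hostname, NodeIP string
}

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(unit))
	// Render to stdout; minikube instead copies the bytes over SSH.
	err := t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.30.1",
		Hostname: "functional-269100",
		NodeIP:   "172.23.195.181",
	})
	if err != nil {
		panic(err)
	}
}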
	I0612 13:20:37.801801   10368 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0612 13:20:37.837709   10368 command_runner.go:130] > cgroupfs
	I0612 13:20:37.837968   10368 cni.go:84] Creating CNI manager for ""
	I0612 13:20:37.837994   10368 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0612 13:20:37.837994   10368 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 13:20:37.838133   10368 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.195.181 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-269100 NodeName:functional-269100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.195.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.195.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 13:20:37.838372   10368 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.195.181
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-269100"
	  kubeletExtraArgs:
	    node-ip: 172.23.195.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.195.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
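In the raw minikube log the three evictionHard percentages above render as "0%!"(MISSING): the intended value is "0%", but the rendered YAML evidently passes through a printf-style formatter somewhere on its way to the log, and Go's fmt flags the literal % followed by a quote as an unknown verb with no operand. A two-line standalone demonstration of the mechanism (not minikube code):

package main

import "fmt"

func main() {
	// Literal YAML used as the format string: fmt sees the verb %" with
	// no matching argument and injects its error marker.
	fmt.Printf(`nodefs.available: "0%"` + "\n") // prints: nodefs.available: "0%!"(MISSING)
	// The conventional fix: pass pre-rendered text as an argument to %s,
	// never as the format string itself.
	fmt.Printf("%s\n", `nodefs.available: "0%"`) // prints: nodefs.available: "0%"
}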
	I0612 13:20:37.850471   10368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 13:20:37.858010   10368 command_runner.go:130] > kubeadm
	I0612 13:20:37.868532   10368 command_runner.go:130] > kubectl
	I0612 13:20:37.868532   10368 command_runner.go:130] > kubelet
	I0612 13:20:37.868652   10368 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 13:20:37.880837   10368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 13:20:37.898644   10368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0612 13:20:37.933559   10368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 13:20:37.965393   10368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0612 13:20:38.013014   10368 ssh_runner.go:195] Run: grep 172.23.195.181	control-plane.minikube.internal$ /etc/hosts
	I0612 13:20:38.019119   10368 command_runner.go:130] > 172.23.195.181	control-plane.minikube.internal
	I0612 13:20:38.032029   10368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:20:38.237382   10368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:20:38.270532   10368 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100 for IP: 172.23.195.181
	I0612 13:20:38.270694   10368 certs.go:194] generating shared ca certs ...
	I0612 13:20:38.270694   10368 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:20:38.271510   10368 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 13:20:38.272043   10368 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 13:20:38.272305   10368 certs.go:256] generating profile certs ...
	I0612 13:20:38.273199   10368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.key
	I0612 13:20:38.273199   10368 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\apiserver.key.77c6ee80
	I0612 13:20:38.273782   10368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\proxy-client.key
	I0612 13:20:38.273881   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 13:20:38.274031   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 13:20:38.274202   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 13:20:38.274389   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 13:20:38.274601   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 13:20:38.274746   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 13:20:38.274972   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 13:20:38.275079   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 13:20:38.275642   10368 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 13:20:38.276010   10368 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 13:20:38.276096   10368 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 13:20:38.276405   10368 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 13:20:38.276727   10368 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 13:20:38.276903   10368 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 13:20:38.277285   10368 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 13:20:38.277285   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 13:20:38.277788   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:20:38.277958   10368 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 13:20:38.278848   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 13:20:38.326120   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 13:20:38.371165   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 13:20:38.429650   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 13:20:38.514469   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 13:20:38.587100   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 13:20:38.661962   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 13:20:38.718445   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 13:20:38.774285   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 13:20:38.827925   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 13:20:38.891680   10368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 13:20:38.955764   10368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 13:20:39.011958   10368 ssh_runner.go:195] Run: openssl version
	I0612 13:20:39.020333   10368 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 13:20:39.032514   10368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 13:20:39.086828   10368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 13:20:39.090203   10368 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 13:20:39.090203   10368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 13:20:39.108717   10368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 13:20:39.110751   10368 command_runner.go:130] > 51391683
	I0612 13:20:39.129157   10368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 13:20:39.167246   10368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 13:20:39.201118   10368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 13:20:39.204982   10368 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 13:20:39.209057   10368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 13:20:39.223441   10368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 13:20:39.232802   10368 command_runner.go:130] > 3ec20f2e
	I0612 13:20:39.245926   10368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 13:20:39.277738   10368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 13:20:39.310660   10368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:20:39.317488   10368 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:20:39.317488   10368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:20:39.330123   10368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:20:39.334950   10368 command_runner.go:130] > b5213941
	I0612 13:20:39.351476   10368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
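The openssl/ln pairs above implement OpenSSL's hashed-directory lookup (the c_rehash convention): a trust anchor is located via /etc/ssl/certs/<subject-hash>.0, so each PEM gets a symlink named after its subject-name hash (51391683, 3ec20f2e and b5213941 in this run). The same two steps from Go, as a sketch using the minikubeCA path from the log, with error handling trimmed:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Step 1: ask openssl for the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// Step 2: create the <hash>.0 symlink that OpenSSL's lookup expects.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}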
	I0612 13:20:39.402892   10368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:20:39.410986   10368 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:20:39.411119   10368 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0612 13:20:39.411196   10368 command_runner.go:130] > Device: 8,1	Inode: 1055058     Links: 1
	I0612 13:20:39.411196   10368 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 13:20:39.411283   10368 command_runner.go:130] > Access: 2024-06-12 20:17:59.415871132 +0000
	I0612 13:20:39.411283   10368 command_runner.go:130] > Modify: 2024-06-12 20:17:59.415871132 +0000
	I0612 13:20:39.411283   10368 command_runner.go:130] > Change: 2024-06-12 20:17:59.415871132 +0000
	I0612 13:20:39.411354   10368 command_runner.go:130] >  Birth: 2024-06-12 20:17:59.415871132 +0000
	I0612 13:20:39.426507   10368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 13:20:39.441146   10368 command_runner.go:130] > Certificate will not expire
	I0612 13:20:39.456689   10368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 13:20:39.468452   10368 command_runner.go:130] > Certificate will not expire
	I0612 13:20:39.484846   10368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 13:20:39.520009   10368 command_runner.go:130] > Certificate will not expire
	I0612 13:20:39.533165   10368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 13:20:39.538870   10368 command_runner.go:130] > Certificate will not expire
	I0612 13:20:39.556144   10368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 13:20:39.558873   10368 command_runner.go:130] > Certificate will not expire
	I0612 13:20:39.579455   10368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 13:20:39.593071   10368 command_runner.go:130] > Certificate will not expire
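"openssl x509 -checkend 86400" exits 0 when the certificate will still be valid 86400 seconds (24 h) from now, which is what each "Certificate will not expire" line reports. The equivalent test in pure Go, parsing the PEM and comparing NotAfter; a sketch, not minikube's certs.go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}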
	I0612 13:20:39.596330   10368 kubeadm.go:391] StartCluster: {Name:functional-269100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-269100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.195.181 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:20:39.606742   10368 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 13:20:39.651959   10368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 13:20:39.674213   10368 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0612 13:20:39.674313   10368 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0612 13:20:39.674313   10368 command_runner.go:130] > /var/lib/minikube/etcd:
	I0612 13:20:39.674313   10368 command_runner.go:130] > member
	W0612 13:20:39.675569   10368 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 13:20:39.675569   10368 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 13:20:39.675569   10368 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 13:20:39.688570   10368 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 13:20:39.706294   10368 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 13:20:39.707587   10368 kubeconfig.go:125] found "functional-269100" server: "https://172.23.195.181:8441"
	I0612 13:20:39.708890   10368 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:20:39.709714   10368 kapi.go:59] client config for functional-269100: &rest.Config{Host:"https://172.23.195.181:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-269100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-269100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 13:20:39.711113   10368 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 13:20:39.722995   10368 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 13:20:39.743889   10368 kubeadm.go:624] The running cluster does not require reconfiguration: 172.23.195.181
	I0612 13:20:39.744008   10368 kubeadm.go:1154] stopping kube-system containers ...
	I0612 13:20:39.755142   10368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 13:20:39.810910   10368 command_runner.go:130] > 5e57619115f0
	I0612 13:20:39.810981   10368 command_runner.go:130] > 58335f6ca672
	I0612 13:20:39.810981   10368 command_runner.go:130] > f5b1f14b8fef
	I0612 13:20:39.810981   10368 command_runner.go:130] > f60b6e2c9ca0
	I0612 13:20:39.810981   10368 command_runner.go:130] > 27e29d0ad258
	I0612 13:20:39.810981   10368 command_runner.go:130] > d6f38411bd8e
	I0612 13:20:39.810981   10368 command_runner.go:130] > bf936158b697
	I0612 13:20:39.810981   10368 command_runner.go:130] > 34d988411ef1
	I0612 13:20:39.810981   10368 command_runner.go:130] > ce3e331da64d
	I0612 13:20:39.810981   10368 command_runner.go:130] > e8d577728a90
	I0612 13:20:39.810981   10368 command_runner.go:130] > 8fa89279ff00
	I0612 13:20:39.810981   10368 command_runner.go:130] > 9b5d02a58f55
	I0612 13:20:39.810981   10368 command_runner.go:130] > 4f135acba992
	I0612 13:20:39.810981   10368 command_runner.go:130] > 3ab3d96f3951
	I0612 13:20:39.810981   10368 command_runner.go:130] > b38eb70a454c
	I0612 13:20:39.810981   10368 command_runner.go:130] > 63d9b50b1e60
	I0612 13:20:39.810981   10368 command_runner.go:130] > ef2e2729de97
	I0612 13:20:39.810981   10368 command_runner.go:130] > be369651ebe3
	I0612 13:20:39.810981   10368 command_runner.go:130] > 8e3e126deeab
	I0612 13:20:39.810981   10368 command_runner.go:130] > d2acc09b6461
	I0612 13:20:39.810981   10368 command_runner.go:130] > 9892a62fd0cb
	I0612 13:20:39.810981   10368 command_runner.go:130] > 3f34d95cace3
	I0612 13:20:39.810981   10368 command_runner.go:130] > 119863e4229c
	I0612 13:20:39.810981   10368 command_runner.go:130] > ac1046109077
	I0612 13:20:39.810981   10368 command_runner.go:130] > 9a10d1cf69ca
	I0612 13:20:39.810981   10368 docker.go:483] Stopping containers: [5e57619115f0 58335f6ca672 f5b1f14b8fef f60b6e2c9ca0 27e29d0ad258 d6f38411bd8e bf936158b697 34d988411ef1 ce3e331da64d e8d577728a90 8fa89279ff00 9b5d02a58f55 4f135acba992 3ab3d96f3951 b38eb70a454c 63d9b50b1e60 ef2e2729de97 be369651ebe3 8e3e126deeab d2acc09b6461 9892a62fd0cb 3f34d95cace3 119863e4229c ac1046109077 9a10d1cf69ca]
	I0612 13:20:39.821970   10368 ssh_runner.go:195] Run: docker stop 5e57619115f0 58335f6ca672 f5b1f14b8fef f60b6e2c9ca0 27e29d0ad258 d6f38411bd8e bf936158b697 34d988411ef1 ce3e331da64d e8d577728a90 8fa89279ff00 9b5d02a58f55 4f135acba992 3ab3d96f3951 b38eb70a454c 63d9b50b1e60 ef2e2729de97 be369651ebe3 8e3e126deeab d2acc09b6461 9892a62fd0cb 3f34d95cace3 119863e4229c ac1046109077 9a10d1cf69ca
	I0612 13:20:40.541421   10368 command_runner.go:130] > 5e57619115f0
	I0612 13:20:40.541464   10368 command_runner.go:130] > 58335f6ca672
	I0612 13:20:40.541464   10368 command_runner.go:130] > f5b1f14b8fef
	I0612 13:20:40.541464   10368 command_runner.go:130] > f60b6e2c9ca0
	I0612 13:20:40.541464   10368 command_runner.go:130] > 27e29d0ad258
	I0612 13:20:40.541464   10368 command_runner.go:130] > d6f38411bd8e
	I0612 13:20:40.541464   10368 command_runner.go:130] > bf936158b697
	I0612 13:20:40.541464   10368 command_runner.go:130] > 34d988411ef1
	I0612 13:20:40.541464   10368 command_runner.go:130] > ce3e331da64d
	I0612 13:20:40.541563   10368 command_runner.go:130] > e8d577728a90
	I0612 13:20:40.541563   10368 command_runner.go:130] > 8fa89279ff00
	I0612 13:20:40.541563   10368 command_runner.go:130] > 9b5d02a58f55
	I0612 13:20:40.541563   10368 command_runner.go:130] > 4f135acba992
	I0612 13:20:40.541563   10368 command_runner.go:130] > 3ab3d96f3951
	I0612 13:20:40.541563   10368 command_runner.go:130] > b38eb70a454c
	I0612 13:20:40.541563   10368 command_runner.go:130] > 63d9b50b1e60
	I0612 13:20:40.541563   10368 command_runner.go:130] > ef2e2729de97
	I0612 13:20:40.541563   10368 command_runner.go:130] > be369651ebe3
	I0612 13:20:40.541638   10368 command_runner.go:130] > 8e3e126deeab
	I0612 13:20:40.541638   10368 command_runner.go:130] > d2acc09b6461
	I0612 13:20:40.541703   10368 command_runner.go:130] > 9892a62fd0cb
	I0612 13:20:40.541703   10368 command_runner.go:130] > 3f34d95cace3
	I0612 13:20:40.541703   10368 command_runner.go:130] > 119863e4229c
	I0612 13:20:40.541703   10368 command_runner.go:130] > ac1046109077
	I0612 13:20:40.541703   10368 command_runner.go:130] > 9a10d1cf69ca
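Before reconfiguring, every kube-system container is stopped in one batch. The name filter works because cri-dockerd names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so matching k8s_.*_(kube-system)_ selects exactly the control-plane and addon containers. A sketch of the two docker invocations from Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all containers (running or not) whose cri-dockerd name marks
	// them as kube-system pods.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	fmt.Println("Stopping containers:", ids)
	// One "docker stop" carrying every ID, exactly as the log shows.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
}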
	I0612 13:20:40.554990   10368 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 13:20:40.631403   10368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 13:20:40.652891   10368 command_runner.go:130] > -rw------- 1 root root 5651 Jun 12 20:18 /etc/kubernetes/admin.conf
	I0612 13:20:40.652891   10368 command_runner.go:130] > -rw------- 1 root root 5654 Jun 12 20:18 /etc/kubernetes/controller-manager.conf
	I0612 13:20:40.652891   10368 command_runner.go:130] > -rw------- 1 root root 2007 Jun 12 20:18 /etc/kubernetes/kubelet.conf
	I0612 13:20:40.652891   10368 command_runner.go:130] > -rw------- 1 root root 5602 Jun 12 20:18 /etc/kubernetes/scheduler.conf
	I0612 13:20:40.652891   10368 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Jun 12 20:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jun 12 20:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jun 12 20:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jun 12 20:18 /etc/kubernetes/scheduler.conf
	
	I0612 13:20:40.664955   10368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0612 13:20:40.680886   10368 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0612 13:20:40.693256   10368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0612 13:20:40.708365   10368 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0612 13:20:40.724812   10368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0612 13:20:40.731812   10368 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0612 13:20:40.753683   10368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 13:20:40.783288   10368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0612 13:20:40.800353   10368 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0612 13:20:40.813238   10368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
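The grep/rm sequence above is the stale-kubeconfig check: each /etc/kubernetes/*.conf must contain the line "server: https://control-plane.minikube.internal:8441"; files that do not (controller-manager.conf and scheduler.conf in this run) are removed so the kubeconfig phase below regenerates them. A simplified stand-in for that logic, not minikube's kubeadm.go:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		// A conf that is unreadable or points at a different endpoint is
		// stale: delete it and let kubeadm write a fresh one.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale", path)
			os.Remove(path)
		}
	}
}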
	I0612 13:20:40.843159   10368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 13:20:40.859526   10368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 13:20:40.935811   10368 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 13:20:40.936399   10368 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0612 13:20:40.936399   10368 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0612 13:20:40.936399   10368 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 13:20:40.936399   10368 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0612 13:20:40.936399   10368 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0612 13:20:40.936399   10368 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0612 13:20:40.936399   10368 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0612 13:20:40.936518   10368 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0612 13:20:40.936518   10368 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 13:20:40.936518   10368 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 13:20:40.936518   10368 command_runner.go:130] > [certs] Using the existing "sa" key
	I0612 13:20:40.936627   10368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 13:20:41.819725   10368 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 13:20:41.819725   10368 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0612 13:20:41.819807   10368 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0612 13:20:41.819807   10368 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0612 13:20:41.819807   10368 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 13:20:41.819807   10368 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 13:20:41.819874   10368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 13:20:42.126869   10368 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 13:20:42.126869   10368 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 13:20:42.126869   10368 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0612 13:20:42.126869   10368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 13:20:42.231906   10368 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 13:20:42.232190   10368 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 13:20:42.232190   10368 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 13:20:42.232190   10368 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 13:20:42.232321   10368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 13:20:42.348621   10368 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 13:20:42.348742   10368 api_server.go:52] waiting for apiserver process to appear ...
	I0612 13:20:42.361887   10368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:20:42.864190   10368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:20:43.364264   10368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:20:43.867254   10368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:20:44.368552   10368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:20:44.381381   10368 command_runner.go:130] > 5855
	I0612 13:20:44.394128   10368 api_server.go:72] duration metric: took 2.0453798s to wait for apiserver process to appear ...
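The repeated pgrep runs at roughly 500 ms spacing are a poll loop: retry "pgrep -xnf kube-apiserver.*minikube.*" until it prints a PID (5855 above), then record the elapsed time as the duration metric. A minimal version of that loop; the 2-minute timeout is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	start := time.Now()
	deadline := start.Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// pgrep exits non-zero until a matching process exists.
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid %s after %s\n",
				strings.TrimSpace(string(out)), time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}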
	I0612 13:20:44.394128   10368 api_server.go:88] waiting for apiserver healthz status ...
	I0612 13:20:44.394128   10368 api_server.go:253] Checking apiserver healthz at https://172.23.195.181:8441/healthz ...
	I0612 13:20:47.082710   10368 api_server.go:279] https://172.23.195.181:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 13:20:47.083209   10368 api_server.go:103] status: https://172.23.195.181:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 13:20:47.083209   10368 api_server.go:253] Checking apiserver healthz at https://172.23.195.181:8441/healthz ...
	I0612 13:20:47.121944   10368 api_server.go:279] https://172.23.195.181:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 13:20:47.129057   10368 api_server.go:103] status: https://172.23.195.181:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 13:20:47.406976   10368 api_server.go:253] Checking apiserver healthz at https://172.23.195.181:8441/healthz ...
	I0612 13:20:47.414585   10368 api_server.go:279] https://172.23.195.181:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 13:20:47.415709   10368 api_server.go:103] status: https://172.23.195.181:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 13:20:47.905024   10368 api_server.go:253] Checking apiserver healthz at https://172.23.195.181:8441/healthz ...
	I0612 13:20:47.915111   10368 api_server.go:279] https://172.23.195.181:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 13:20:47.915164   10368 api_server.go:103] status: https://172.23.195.181:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 13:20:48.398484   10368 api_server.go:253] Checking apiserver healthz at https://172.23.195.181:8441/healthz ...
	I0612 13:20:48.405491   10368 api_server.go:279] https://172.23.195.181:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 13:20:48.407978   10368 api_server.go:103] status: https://172.23.195.181:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 13:20:48.913026   10368 api_server.go:253] Checking apiserver healthz at https://172.23.195.181:8441/healthz ...
	I0612 13:20:48.919919   10368 api_server.go:279] https://172.23.195.181:8441/healthz returned 200:
	ok
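The healthz progression above is the normal restart sequence: 403 while the freshly started apiserver still rejects the anonymous probe, 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish, then 200 "ok". A poller that treats anything but 200 as "not ready yet"; a sketch that skips TLS verification for brevity, where minikube instead trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://172.23.195.181:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous forbidden) and 500 (post-start hooks still
			// running) both mean "keep waiting"; only 200 "ok" passes.
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			fmt.Println("not healthy yet:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}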
	I0612 13:20:48.922665   10368 round_trippers.go:463] GET https://172.23.195.181:8441/version
	I0612 13:20:48.922792   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:48.922792   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:48.922853   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:48.938135   10368 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0612 13:20:48.938135   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:48.938135   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:48.938135   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:48.938135   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:48.938135   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:48.938135   10368 round_trippers.go:580]     Content-Length: 263
	I0612 13:20:48.938135   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:48 GMT
	I0612 13:20:48.939892   10368 round_trippers.go:580]     Audit-Id: 1c27ffbb-d847-43e5-aac2-85be369d1553
	I0612 13:20:48.939892   10368 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0612 13:20:48.940157   10368 api_server.go:141] control plane version: v1.30.1
	I0612 13:20:48.940157   10368 api_server.go:131] duration metric: took 4.5460144s to wait for apiserver health ...
	I0612 13:20:48.940157   10368 cni.go:84] Creating CNI manager for ""
	I0612 13:20:48.940157   10368 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0612 13:20:48.943504   10368 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0612 13:20:48.957117   10368 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0612 13:20:48.977082   10368 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
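The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. The conflist below is a plausible reconstruction of such a bridge + host-local setup, not minikube's exact bytes:

package main

import "os"

// A typical bridge + host-local conflist; the real template may differ
// in field details.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// minikube scps these bytes into the guest; locally this is a plain write.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}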
	I0612 13:20:49.019358   10368 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 13:20:49.019358   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods
	I0612 13:20:49.019358   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:49.019358   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:49.019358   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:49.033381   10368 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0612 13:20:49.040598   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:49.040598   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:49.040598   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:49.040660   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:49.040660   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:49.040660   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:49 GMT
	I0612 13:20:49.040707   10368 round_trippers.go:580]     Audit-Id: d053195c-f821-45af-b3c0-e294bb8f8c85
	I0612 13:20:49.049512   10368 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"599"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"544","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52101 chars]
	I0612 13:20:49.056569   10368 system_pods.go:59] 7 kube-system pods found
	I0612 13:20:49.056569   10368 system_pods.go:61] "coredns-7db6d8ff4d-8b5dd" [3a86a91c-36de-41f5-b243-00743e29acba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 13:20:49.056716   10368 system_pods.go:61] "etcd-functional-269100" [e0ce37ea-d250-471f-91dc-c6d5c1dbc26a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 13:20:49.056716   10368 system_pods.go:61] "kube-apiserver-functional-269100" [60fe25bc-0c96-4afe-9cb1-dfc324dc7ac3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 13:20:49.056716   10368 system_pods.go:61] "kube-controller-manager-functional-269100" [e706c833-6890-462a-b7a3-240a9fd2470a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 13:20:49.056782   10368 system_pods.go:61] "kube-proxy-n648c" [4f6f5e07-4ced-484d-a47c-1af2e55ce102] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 13:20:49.056782   10368 system_pods.go:61] "kube-scheduler-functional-269100" [78bc6f0f-601f-4bca-9874-f57640a6545d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 13:20:49.056782   10368 system_pods.go:61] "storage-provisioner" [a5945727-bd26-4c6e-8afe-1ae05bcd4944] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 13:20:49.056782   10368 system_pods.go:74] duration metric: took 37.4235ms to wait for pod list to return data ...
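
The system_pods.go lines above list the kube-system namespace and report each pod's PodReady condition (all seven show ContainersNotReady after the restart). A minimal client-go sketch of the same check follows; it is an illustration under stated assumptions, not minikube's actual code, and it assumes a kubeconfig at the default home path:

// sketch: list kube-system pods and report the PodReady condition,
// mirroring the system_pods.go output above (assumed, not minikube's code)
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// assumes a reachable cluster via ~/.kube/config
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// report the pod-level Ready condition, as the log lines above do
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("%q %s / Ready:%s %s\n", p.Name, p.Status.Phase, c.Status, c.Reason)
			}
		}
	}
}
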
	I0612 13:20:49.056782   10368 node_conditions.go:102] verifying NodePressure condition ...
	I0612 13:20:49.056782   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes
	I0612 13:20:49.056782   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:49.056782   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:49.056782   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:49.058109   10368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 13:20:49.058109   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:49.063764   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:49.063764   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:49.063764   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:49.063764   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:49.063764   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:49 GMT
	I0612 13:20:49.063764   10368 round_trippers.go:580]     Audit-Id: 83e0d654-29ae-4869-a364-d14ff328094f
	I0612 13:20:49.063977   10368 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"599"},"items":[{"metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0612 13:20:49.065287   10368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:20:49.065347   10368 node_conditions.go:123] node cpu capacity is 2
	I0612 13:20:49.065347   10368 node_conditions.go:105] duration metric: took 8.5651ms to run NodePressure ...
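
The node_conditions.go lines above read ephemeral-storage and CPU capacity out of the NodeList to verify NodePressure. A short sketch of that read, again an assumed illustration using client-go rather than minikube's implementation:

// sketch: fetch the NodeList and print the two capacity fields the log
// reports (ephemeral storage, cpu); assumes the default kubeconfig
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// copy to locals: resource.Quantity's String() needs an addressable value
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
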
	I0612 13:20:49.065347   10368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 13:20:49.453926   10368 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0612 13:20:49.455654   10368 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0612 13:20:49.455874   10368 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 13:20:49.456679   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0612 13:20:49.457208   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:49.457270   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:49.457421   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:49.462827   10368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:20:49.462827   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:49.466262   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:49 GMT
	I0612 13:20:49.466262   10368 round_trippers.go:580]     Audit-Id: 6ad8ecc4-688d-47e9-926d-06f4cdbd98e9
	I0612 13:20:49.466262   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:49.466262   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:49.466262   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:49.466262   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:49.467801   10368 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"603"},"items":[{"metadata":{"name":"etcd-functional-269100","namespace":"kube-system","uid":"e0ce37ea-d250-471f-91dc-c6d5c1dbc26a","resourceVersion":"548","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.195.181:2379","kubernetes.io/config.hash":"7782b140ee8d830e857a9b7f130b7d9f","kubernetes.io/config.mirror":"7782b140ee8d830e857a9b7f130b7d9f","kubernetes.io/config.seen":"2024-06-12T20:18:10.244895797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31381 chars]
	I0612 13:20:49.471131   10368 kubeadm.go:733] kubelet initialised
	I0612 13:20:49.471131   10368 kubeadm.go:734] duration metric: took 14.4519ms waiting for restarted kubelet to initialise ...
	I0612 13:20:49.471131   10368 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
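
Everything that follows is one wait loop: pod_ready.go polls the coredns pod roughly every 500ms (per the timestamps) until its Ready condition is True or the 4m0s budget above expires, logging a `"Ready":"False"` line on each miss. A minimal sketch of that loop, assuming client-go, the default kubeconfig, and the pod name taken from the log; minikube's real implementation lives in pod_ready.go:

// sketch: poll one pod until its Ready condition is True or a 4-minute
// deadline passes, echoing the wait loop logged below (an assumption-laden
// illustration, not minikube's code)
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod-level Ready condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // budget from pod_ready.go:35 above
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-8b5dd", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(500 * time.Millisecond) // cadence inferred from the log timestamps
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
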
	I0612 13:20:49.471668   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods
	I0612 13:20:49.471668   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:49.471668   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:49.471668   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:49.482100   10368 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:20:49.483223   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:49.483223   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:49.483223   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:49.483223   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:49.483223   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:49.483223   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:49 GMT
	I0612 13:20:49.483223   10368 round_trippers.go:580]     Audit-Id: 476e85d9-0f9f-4c5b-8dc4-a0735e2e066e
	I0612 13:20:49.484392   10368 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"603"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"544","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52101 chars]
	I0612 13:20:49.487017   10368 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:49.487079   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:49.487079   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:49.487079   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:49.487079   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:49.490200   10368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:20:49.490200   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:49.490200   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:49.490437   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:49 GMT
	I0612 13:20:49.490437   10368 round_trippers.go:580]     Audit-Id: 23f5121a-6b71-4f61-b126-359babe4bfd2
	I0612 13:20:49.490437   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:49.490437   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:49.490437   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:49.490637   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"544","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0612 13:20:49.491274   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:49.491331   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:49.491331   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:49.491331   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:49.493953   10368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 13:20:49.493953   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:49.493953   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:49.493953   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:49.493953   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:49.493953   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:49.493953   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:49 GMT
	I0612 13:20:49.493953   10368 round_trippers.go:580]     Audit-Id: 4265f276-66e4-4d8f-a21d-35eed0f02afc
	I0612 13:20:49.494636   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:49.991152   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:49.991254   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:49.991254   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:49.991331   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:49.994984   10368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:20:49.995512   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:49.995512   10368 round_trippers.go:580]     Audit-Id: 0ad19020-869b-4fe1-9016-bd2ce0963942
	I0612 13:20:49.995512   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:49.995512   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:49.995603   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:49.995603   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:49.995672   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:49 GMT
	I0612 13:20:49.995929   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:49.996678   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:49.996734   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:49.996734   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:49.996734   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:49.999415   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:49.999415   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:49.999415   10368 round_trippers.go:580]     Audit-Id: 716ae423-857c-41f6-9501-ea3e1971f7bd
	I0612 13:20:49.999415   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:49.999415   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:49.999415   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:49.999415   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:49.999415   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:50 GMT
	I0612 13:20:49.999685   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:50.495207   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:50.495207   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:50.495207   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:50.495207   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:50.495767   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:50.500067   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:50.500067   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:50.500067   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:50.500067   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:50.500067   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:50.500067   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:50 GMT
	I0612 13:20:50.500067   10368 round_trippers.go:580]     Audit-Id: 32f93767-bb2e-423a-babf-96a888fe02b6
	I0612 13:20:50.500361   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:50.501176   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:50.501176   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:50.501176   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:50.501176   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:50.501555   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:50.504297   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:50.504297   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:50.504297   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:50.504297   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:50.504297   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:50.504297   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:50 GMT
	I0612 13:20:50.504297   10368 round_trippers.go:580]     Audit-Id: e07df610-f2bd-409b-9990-9efecc182e4c
	I0612 13:20:50.504593   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:50.996191   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:50.996281   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:50.996281   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:50.996281   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:51.002522   10368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:20:51.002522   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:51.003093   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:51 GMT
	I0612 13:20:51.003093   10368 round_trippers.go:580]     Audit-Id: 4038a082-7e95-47a8-aac5-6bf595f30603
	I0612 13:20:51.003093   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:51.003093   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:51.003093   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:51.003093   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:51.003355   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:51.004353   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:51.004388   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:51.004388   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:51.004431   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:51.005864   10368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 13:20:51.007793   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:51.007793   10368 round_trippers.go:580]     Audit-Id: cf10a858-df41-469d-8d49-1dc354d0c0bb
	I0612 13:20:51.007793   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:51.007793   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:51.007793   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:51.007793   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:51.007793   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:51 GMT
	I0612 13:20:51.007793   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:51.494994   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:51.495091   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:51.495091   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:51.495091   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:51.495492   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:51.495492   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:51.498833   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:51.498833   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:51.498833   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:51.498833   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:51 GMT
	I0612 13:20:51.498833   10368 round_trippers.go:580]     Audit-Id: 9bb9ea6b-7c44-4092-84b7-13f890849fc3
	I0612 13:20:51.498833   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:51.499036   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:51.500007   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:51.500059   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:51.500059   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:51.500059   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:51.500267   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:51.500267   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:51.500267   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:51.500267   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:51.502931   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:51 GMT
	I0612 13:20:51.502931   10368 round_trippers.go:580]     Audit-Id: a2a98ce3-a684-459f-9452-04e58eff8b6b
	I0612 13:20:51.502931   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:51.502931   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:51.503201   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:51.503657   10368 pod_ready.go:102] pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace has status "Ready":"False"
	I0612 13:20:51.991966   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:51.995090   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:51.995090   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:51.995090   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:52.000103   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:52.000103   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:52.000103   10368 round_trippers.go:580]     Audit-Id: 93d22edb-dedd-43c8-bdc0-d094a45d32d1
	I0612 13:20:52.000103   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:52.000103   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:52.000188   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:52.000188   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:52.000188   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:52 GMT
	I0612 13:20:52.001149   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:52.001956   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:52.001956   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:52.002050   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:52.002050   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:52.002215   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:52.004550   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:52.004642   10368 round_trippers.go:580]     Audit-Id: 834829a3-0d9a-4487-bade-bfd2d428933b
	I0612 13:20:52.004642   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:52.004642   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:52.004642   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:52.004642   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:52.004642   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:52 GMT
	I0612 13:20:52.004852   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:52.497917   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:52.497961   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:52.498029   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:52.498029   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:52.501353   10368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:20:52.502253   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:52.502253   10368 round_trippers.go:580]     Audit-Id: 09962c82-4d5e-416a-8206-767a47c238aa
	I0612 13:20:52.502253   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:52.502253   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:52.502253   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:52.502334   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:52.502363   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:52 GMT
	I0612 13:20:52.502589   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:52.503407   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:52.503527   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:52.503527   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:52.503527   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:52.503753   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:52.506548   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:52.506548   10368 round_trippers.go:580]     Audit-Id: f362dd14-6315-4b25-9cc6-10515640fc8f
	I0612 13:20:52.506548   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:52.506548   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:52.506548   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:52.506548   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:52.506548   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:52 GMT
	I0612 13:20:52.506982   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:53.002943   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:53.002943   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:53.002943   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:53.002943   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:53.003969   10368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 13:20:53.003969   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:53.003969   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:53.003969   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:53.003969   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:53.003969   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:53 GMT
	I0612 13:20:53.003969   10368 round_trippers.go:580]     Audit-Id: 2f0f9d15-5caa-447b-b48b-7f5730ab2047
	I0612 13:20:53.003969   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:53.009775   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:53.010595   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:53.010670   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:53.010670   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:53.010670   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:53.014188   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:53.014227   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:53.014227   10368 round_trippers.go:580]     Audit-Id: 5159f753-7927-4da5-9c69-f8cc1e34bafc
	I0612 13:20:53.014227   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:53.014227   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:53.014227   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:53.014227   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:53.014227   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:53 GMT
	I0612 13:20:53.014227   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:53.488433   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:53.488433   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:53.488433   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:53.488433   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:53.489298   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:53.493264   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:53.493264   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:53.493264   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:53.493264   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:53 GMT
	I0612 13:20:53.493264   10368 round_trippers.go:580]     Audit-Id: 42a5a83d-0562-40d4-9bfb-626981cfa5db
	I0612 13:20:53.493264   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:53.493264   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:53.493264   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:53.494332   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:53.494332   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:53.494332   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:53.494332   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:53.494577   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:53.494577   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:53.494577   10368 round_trippers.go:580]     Audit-Id: 4f5f599d-dec4-4547-98d3-14dba79e2a20
	I0612 13:20:53.494577   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:53.494577   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:53.494577   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:53.494577   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:53.494577   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:53 GMT
	I0612 13:20:53.497816   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:54.001754   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:54.001754   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:54.001837   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:54.001837   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:54.002144   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:54.002144   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:54.002144   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:54.006696   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:54.006696   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:54.006696   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:54 GMT
	I0612 13:20:54.006696   10368 round_trippers.go:580]     Audit-Id: 75135d19-95b6-4f22-a213-9dad8bb7ede6
	I0612 13:20:54.006696   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:54.006869   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:54.008143   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:54.008143   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:54.008143   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:54.008143   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:54.008478   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:54.011547   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:54.011547   10368 round_trippers.go:580]     Audit-Id: c2610d33-654d-46ec-b147-adb0babc9773
	I0612 13:20:54.011547   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:54.011547   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:54.011547   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:54.011547   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:54.011547   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:54 GMT
	I0612 13:20:54.011843   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:54.012327   10368 pod_ready.go:102] pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace has status "Ready":"False"
	I0612 13:20:54.497345   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:54.497345   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:54.497429   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:54.497429   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:54.497811   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:54.501418   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:54.501418   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:54.501418   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:54 GMT
	I0612 13:20:54.501418   10368 round_trippers.go:580]     Audit-Id: f7e71eba-485d-4df1-b592-a580011a89ef
	I0612 13:20:54.501418   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:54.501418   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:54.501418   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:54.501710   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:54.502605   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:54.502672   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:54.502672   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:54.502672   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:54.504968   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:54.504968   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:54.504968   10368 round_trippers.go:580]     Audit-Id: 4df465ea-c24a-4a00-be2e-f9c6abcf9fed
	I0612 13:20:54.505039   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:54.505039   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:54.505039   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:54.505039   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:54.505039   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:54 GMT
	I0612 13:20:54.505464   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:55.001429   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:55.001429   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:55.001429   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:55.001429   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:55.002003   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:55.002003   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:55.002003   10368 round_trippers.go:580]     Audit-Id: 5d3b7b15-cdaf-482a-ad37-374f5cc61884
	I0612 13:20:55.006214   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:55.006214   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:55.006214   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:55.006214   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:55.006214   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:55 GMT
	I0612 13:20:55.006786   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:55.007881   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:55.007914   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:55.007945   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:55.007945   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:55.013608   10368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:20:55.013608   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:55.013608   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:55.013608   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:55.013608   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:55.013608   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:55.013608   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:55 GMT
	I0612 13:20:55.013608   10368 round_trippers.go:580]     Audit-Id: 464a839a-e84c-48cb-828d-c9c658b6279f
	I0612 13:20:55.014186   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:55.501192   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:55.501192   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:55.501192   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:55.501192   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:55.501780   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:55.506318   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:55.506318   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:55.506318   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:55.506318   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:55.506318   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:55.506318   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:55 GMT
	I0612 13:20:55.506318   10368 round_trippers.go:580]     Audit-Id: 43306513-9aca-494d-811a-c83a3d7fcf53
	I0612 13:20:55.506318   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:55.507444   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:55.507513   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:55.507513   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:55.507513   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:55.510865   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:55.510865   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:55.510865   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:55.510865   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:55 GMT
	I0612 13:20:55.510865   10368 round_trippers.go:580]     Audit-Id: c19c6c3b-289e-4f50-9d3a-491a6b8db2e5
	I0612 13:20:55.510865   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:55.510865   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:55.510865   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:55.510865   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:55.989580   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:55.989580   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:55.989580   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:55.989580   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:55.990111   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:55.990111   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:55.990111   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:55.990111   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:55.990111   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:55.990111   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:55 GMT
	I0612 13:20:55.993985   10368 round_trippers.go:580]     Audit-Id: 2dbddeae-6e7f-4192-abdc-ac37ac2b383e
	I0612 13:20:55.993985   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:55.994163   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:55.994925   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:55.994925   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:55.994925   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:55.994925   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:55.995601   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:55.995601   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:55.995601   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:55.995601   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:55.995601   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:55.995601   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:56 GMT
	I0612 13:20:55.995601   10368 round_trippers.go:580]     Audit-Id: 25962df7-d407-4867-b3ea-82765b087d16
	I0612 13:20:55.995601   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:55.998915   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:56.497860   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:56.497906   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:56.497939   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:56.497939   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:56.498695   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:56.498695   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:56.501695   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:56 GMT
	I0612 13:20:56.501695   10368 round_trippers.go:580]     Audit-Id: 35815d7d-89d5-46b8-836b-171fdb4e00ff
	I0612 13:20:56.501695   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:56.501695   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:56.501695   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:56.501695   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:56.502010   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:56.502795   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:56.502795   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:56.502795   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:56.502795   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:56.503361   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:56.503361   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:56.503361   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:56 GMT
	I0612 13:20:56.503361   10368 round_trippers.go:580]     Audit-Id: 30eeec0e-faa2-40a5-822d-ebcc45abba9b
	I0612 13:20:56.503361   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:56.503361   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:56.503361   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:56.505969   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:56.506207   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:56.506207   10368 pod_ready.go:102] pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace has status "Ready":"False"
	I0612 13:20:56.993965   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:56.996663   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:56.996739   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:56.996739   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:56.997420   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:56.997420   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:56.997420   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:56.997420   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:56.997420   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:56.997420   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:56.997420   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:56.997420   10368 round_trippers.go:580]     Audit-Id: d706f57e-7db0-4bed-8177-17190d37bf5b
	I0612 13:20:57.000307   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"605","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0612 13:20:57.001142   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:57.001142   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.001142   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.001142   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.004060   10368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 13:20:57.004060   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.004060   10368 round_trippers.go:580]     Audit-Id: 64a7cee0-a151-4c00-926d-4653e119d411
	I0612 13:20:57.004060   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.004060   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.004608   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.004608   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.004683   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.004701   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:57.494967   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:20:57.495081   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.495214   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.495214   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.495501   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:57.495501   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.495501   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.500360   10368 round_trippers.go:580]     Audit-Id: 78741971-1256-4d38-b516-224fa205637e
	I0612 13:20:57.500430   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.500430   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.500430   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.500510   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.500721   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"611","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0612 13:20:57.500970   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:57.501550   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.501550   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.501550   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.512753   10368 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0612 13:20:57.512753   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.512753   10368 round_trippers.go:580]     Audit-Id: 6d421db4-95d2-4d8d-824e-56970746b657
	I0612 13:20:57.512753   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.512753   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.512753   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.512753   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.512753   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.512753   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:57.513483   10368 pod_ready.go:92] pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace has status "Ready":"True"
	I0612 13:20:57.513483   10368 pod_ready.go:81] duration metric: took 8.0263786s for pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:57.513483   10368 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:57.513483   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/etcd-functional-269100
	I0612 13:20:57.513483   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.513483   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.513483   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.515793   10368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 13:20:57.515793   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.515793   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.515793   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.515793   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.515793   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.517135   10368 round_trippers.go:580]     Audit-Id: 7dc00e44-fa69-4907-9858-3b6173d690fd
	I0612 13:20:57.517135   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.517264   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-269100","namespace":"kube-system","uid":"e0ce37ea-d250-471f-91dc-c6d5c1dbc26a","resourceVersion":"610","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.195.181:2379","kubernetes.io/config.hash":"7782b140ee8d830e857a9b7f130b7d9f","kubernetes.io/config.mirror":"7782b140ee8d830e857a9b7f130b7d9f","kubernetes.io/config.seen":"2024-06-12T20:18:10.244895797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6380 chars]
	I0612 13:20:57.517980   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:57.517980   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.517980   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.517980   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.518265   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:57.518265   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.518265   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.518265   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.518265   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.518265   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.518265   10368 round_trippers.go:580]     Audit-Id: 32eee929-5c1b-4a73-8fad-33dc78336c56
	I0612 13:20:57.518265   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.520694   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:57.521089   10368 pod_ready.go:92] pod "etcd-functional-269100" in "kube-system" namespace has status "Ready":"True"
	I0612 13:20:57.521209   10368 pod_ready.go:81] duration metric: took 7.7265ms for pod "etcd-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:57.521209   10368 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:57.521339   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-269100
	I0612 13:20:57.521436   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.521436   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.521436   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.521655   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:57.521655   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.521655   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.521655   10368 round_trippers.go:580]     Audit-Id: aca80541-920d-4231-982c-1f69091cd859
	I0612 13:20:57.521655   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.521655   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.524194   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.524194   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.524288   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-269100","namespace":"kube-system","uid":"60fe25bc-0c96-4afe-9cb1-dfc324dc7ac3","resourceVersion":"608","creationTimestamp":"2024-06-12T20:18:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.195.181:8441","kubernetes.io/config.hash":"83916be9a9cb886186be47e309104ec9","kubernetes.io/config.mirror":"83916be9a9cb886186be47e309104ec9","kubernetes.io/config.seen":"2024-06-12T20:18:02.839843795Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8059 chars]
	I0612 13:20:57.525156   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:57.525156   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.525242   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.525242   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.525468   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:57.525468   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.525468   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.525468   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.525468   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.528318   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.528318   10368 round_trippers.go:580]     Audit-Id: 4ab39a16-16af-40f9-a6ef-c563d9b126b5
	I0612 13:20:57.528318   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.528546   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:57.528850   10368 pod_ready.go:92] pod "kube-apiserver-functional-269100" in "kube-system" namespace has status "Ready":"True"
	I0612 13:20:57.528850   10368 pod_ready.go:81] duration metric: took 7.6405ms for pod "kube-apiserver-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:57.528850   10368 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:57.528850   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-269100
	I0612 13:20:57.528850   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.528850   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.528850   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.529563   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:57.529563   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.529563   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.529563   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.529563   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.529563   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.532346   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.532346   10368 round_trippers.go:580]     Audit-Id: 08d2b52e-7804-40b2-8e73-be4dcf36d861
	I0612 13:20:57.532735   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-269100","namespace":"kube-system","uid":"e706c833-6890-462a-b7a3-240a9fd2470a","resourceVersion":"557","creationTimestamp":"2024-06-12T20:18:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.mirror":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.seen":"2024-06-12T20:18:02.839844795Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0612 13:20:57.533468   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:57.533503   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:57.533553   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:57.533553   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:57.533795   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:57.533795   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:57.533795   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:57.533795   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:57.533795   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:57.533795   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:57 GMT
	I0612 13:20:57.533795   10368 round_trippers.go:580]     Audit-Id: 069b5fe2-4248-4f2c-aa9a-5ccca7dd77a0
	I0612 13:20:57.533795   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:57.533795   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:58.033496   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-269100
	I0612 13:20:58.033496   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:58.033496   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:58.033496   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:58.038646   10368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:20:58.038835   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:58.038835   10368 round_trippers.go:580]     Audit-Id: 13650c78-7574-48ee-80dc-7bdbe0ebf26d
	I0612 13:20:58.038835   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:58.038835   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:58.038835   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:58.038835   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:58.038835   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:58 GMT
	I0612 13:20:58.038917   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-269100","namespace":"kube-system","uid":"e706c833-6890-462a-b7a3-240a9fd2470a","resourceVersion":"557","creationTimestamp":"2024-06-12T20:18:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.mirror":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.seen":"2024-06-12T20:18:02.839844795Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0612 13:20:58.040326   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:58.040326   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:58.040326   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:58.040445   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:58.046008   10368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:20:58.046008   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:58.046008   10368 round_trippers.go:580]     Audit-Id: a8fe1523-dcbf-4950-afa1-31185c70e81a
	I0612 13:20:58.046008   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:58.046008   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:58.046008   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:58.046008   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:58.046008   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:58 GMT
	I0612 13:20:58.046008   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:58.538539   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-269100
	I0612 13:20:58.538599   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:58.538599   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:58.538692   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:58.539026   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:58.539026   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:58.543349   10368 round_trippers.go:580]     Audit-Id: 32914fa9-0ed4-4725-8ce2-86d1d567e44a
	I0612 13:20:58.543349   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:58.543349   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:58.543349   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:58.543349   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:58.543349   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:58 GMT
	I0612 13:20:58.544116   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-269100","namespace":"kube-system","uid":"e706c833-6890-462a-b7a3-240a9fd2470a","resourceVersion":"557","creationTimestamp":"2024-06-12T20:18:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.mirror":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.seen":"2024-06-12T20:18:02.839844795Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7739 chars]
	I0612 13:20:58.544734   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:58.544734   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:58.544734   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:58.544734   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:58.550411   10368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:20:58.550466   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:58.550466   10368 round_trippers.go:580]     Audit-Id: 140f8b21-98b8-42f4-9220-e69028c40f60
	I0612 13:20:58.550466   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:58.550510   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:58.550510   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:58.550562   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:58.550562   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:58 GMT
	I0612 13:20:58.551526   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:59.041070   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-269100
	I0612 13:20:59.041070   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:59.041070   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:59.041070   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:59.041789   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:59.041789   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:59.041789   10368 round_trippers.go:580]     Audit-Id: d285c761-b56f-4945-a0e5-a829bff12ec3
	I0612 13:20:59.041789   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:59.041789   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:59.041789   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:59.041789   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:59.041789   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:59 GMT
	I0612 13:20:59.046230   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-269100","namespace":"kube-system","uid":"e706c833-6890-462a-b7a3-240a9fd2470a","resourceVersion":"616","creationTimestamp":"2024-06-12T20:18:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.mirror":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.seen":"2024-06-12T20:18:02.839844795Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0612 13:20:59.046944   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:59.046944   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:59.046944   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:59.046944   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:59.052227   10368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:20:59.052227   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:59.052227   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:59.052227   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:59.052227   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:59.052227   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:59 GMT
	I0612 13:20:59.052227   10368 round_trippers.go:580]     Audit-Id: 196e1c84-12bd-4526-a11f-9a9d56664668
	I0612 13:20:59.052227   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:59.052821   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:59.053046   10368 pod_ready.go:92] pod "kube-controller-manager-functional-269100" in "kube-system" namespace has status "Ready":"True"
	I0612 13:20:59.053046   10368 pod_ready.go:81] duration metric: took 1.5241918s for pod "kube-controller-manager-functional-269100" in "kube-system" namespace to be "Ready" ...
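
The `pod_ready.go:92` lines above are the result of inspecting the fetched pod's status conditions. A minimal sketch of that check, assuming a typed corev1.Pod already retrieved from the API (the helper name is illustrative, not minikube's):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod has a Ready condition with
    // status True; this is the check behind `has status "Ready":"True"`
    // in the log above.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
    		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
    	}}}
    	fmt.Println(isPodReady(pod)) // true
    }
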
	I0612 13:20:59.053046   10368 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n648c" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:59.053046   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-proxy-n648c
	I0612 13:20:59.053046   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:59.053046   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:59.053046   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:59.057446   10368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:20:59.057446   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:59.057446   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:59.057446   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:59.057446   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:59 GMT
	I0612 13:20:59.057446   10368 round_trippers.go:580]     Audit-Id: 70d88cc3-2014-4dfd-bc63-ec3c486f2d04
	I0612 13:20:59.057446   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:59.057446   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:59.058613   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n648c","generateName":"kube-proxy-","namespace":"kube-system","uid":"4f6f5e07-4ced-484d-a47c-1af2e55ce102","resourceVersion":"606","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7be40b9-7c48-4a96-b5e2-e63d6851e21e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7be40b9-7c48-4a96-b5e2-e63d6851e21e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6180 chars]
	I0612 13:20:59.059568   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:59.059568   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:59.060116   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:59.060116   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:59.063086   10368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 13:20:59.063437   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:59.063497   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:59.063497   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:59.063551   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:59.063585   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:59.063585   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:59 GMT
	I0612 13:20:59.063585   10368 round_trippers.go:580]     Audit-Id: 6e207686-db12-4054-8682-1ef0495282ca
	I0612 13:20:59.064568   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:59.064643   10368 pod_ready.go:92] pod "kube-proxy-n648c" in "kube-system" namespace has status "Ready":"True"
	I0612 13:20:59.064643   10368 pod_ready.go:81] duration metric: took 11.5969ms for pod "kube-proxy-n648c" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:59.064643   10368 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:20:59.065360   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-269100
	I0612 13:20:59.065360   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:59.065360   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:59.065360   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:59.065735   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:59.065735   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:59.069533   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:59.069533   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:59.069533   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:59.069533   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:59 GMT
	I0612 13:20:59.069533   10368 round_trippers.go:580]     Audit-Id: 210ff368-81e8-40b2-ba5b-122060453285
	I0612 13:20:59.069533   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:59.069826   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-269100","namespace":"kube-system","uid":"78bc6f0f-601f-4bca-9874-f57640a6545d","resourceVersion":"551","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.mirror":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.seen":"2024-06-12T20:18:10.244901797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5703 chars]
	I0612 13:20:59.097024   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:59.097098   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:59.097098   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:59.097098   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:59.099098   10368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 13:20:59.100137   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:59.100191   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:59.100191   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:59.100240   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:59.100240   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:59.100240   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:59 GMT
	I0612 13:20:59.100240   10368 round_trippers.go:580]     Audit-Id: 9cb927b1-d756-4c73-b795-cdb768ddaef4
	I0612 13:20:59.100502   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:20:59.571387   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-269100
	I0612 13:20:59.571387   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:59.571387   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:59.571387   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:59.571915   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:20:59.571915   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:59.571915   10368 round_trippers.go:580]     Audit-Id: 1197ee41-bb0d-4e40-82fa-6a28e58f98f3
	I0612 13:20:59.571915   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:59.571915   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:59.576921   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:59.576921   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:59.576921   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:59 GMT
	I0612 13:20:59.577119   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-269100","namespace":"kube-system","uid":"78bc6f0f-601f-4bca-9874-f57640a6545d","resourceVersion":"551","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.mirror":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.seen":"2024-06-12T20:18:10.244901797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5703 chars]
	I0612 13:20:59.577311   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:20:59.577311   10368 round_trippers.go:469] Request Headers:
	I0612 13:20:59.577311   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:20:59.577311   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:20:59.586263   10368 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:20:59.586263   10368 round_trippers.go:577] Response Headers:
	I0612 13:20:59.587387   10368 round_trippers.go:580]     Audit-Id: 1c8d6d6d-d879-4dfb-abb8-8b51e7199489
	I0612 13:20:59.587387   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:20:59.587387   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:20:59.587387   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:20:59.587432   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:20:59.587432   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:20:59 GMT
	I0612 13:20:59.587503   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:00.079398   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-269100
	I0612 13:21:00.079678   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:00.079678   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:00.079678   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:00.080072   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:00.084033   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:00.084033   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:00.084033   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:00.084130   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:00.084130   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:00 GMT
	I0612 13:21:00.084130   10368 round_trippers.go:580]     Audit-Id: 68671008-cf01-4055-9e05-4729fbf5def2
	I0612 13:21:00.084130   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:00.084237   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-269100","namespace":"kube-system","uid":"78bc6f0f-601f-4bca-9874-f57640a6545d","resourceVersion":"551","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.mirror":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.seen":"2024-06-12T20:18:10.244901797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5703 chars]
	I0612 13:21:00.085124   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:00.085124   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:00.085124   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:00.085124   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:00.085422   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:00.087886   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:00.087886   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:00.087886   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:00.087886   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:00.087990   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:00 GMT
	I0612 13:21:00.087990   10368 round_trippers.go:580]     Audit-Id: 993bd6c6-0a83-480e-9aed-9e804b9c417a
	I0612 13:21:00.087990   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:00.088355   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:00.574075   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-269100
	I0612 13:21:00.574075   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:00.574075   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:00.574075   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:00.574678   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:00.574678   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:00.578290   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:00.578290   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:00.578290   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:00.578290   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:00 GMT
	I0612 13:21:00.578290   10368 round_trippers.go:580]     Audit-Id: 8ed6d846-bb5e-419b-9e42-52055cc9a979
	I0612 13:21:00.578290   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:00.578750   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-269100","namespace":"kube-system","uid":"78bc6f0f-601f-4bca-9874-f57640a6545d","resourceVersion":"551","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.mirror":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.seen":"2024-06-12T20:18:10.244901797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5703 chars]
	I0612 13:21:00.579517   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:00.579583   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:00.579583   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:00.579583   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:00.579792   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:00.582215   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:00.582215   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:00.582215   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:00.582215   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:00.582215   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:00 GMT
	I0612 13:21:00.582215   10368 round_trippers.go:580]     Audit-Id: 151d8a5e-86b8-47b3-9a1f-f5b06218d6b8
	I0612 13:21:00.582358   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:00.582463   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:01.080364   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-269100
	I0612 13:21:01.080364   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:01.080364   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:01.080633   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:01.084510   10368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:21:01.084510   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:01.085483   10368 round_trippers.go:580]     Audit-Id: 002d23af-1cc6-4b7e-b665-5d509407a7e7
	I0612 13:21:01.085483   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:01.085483   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:01.085483   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:01.085483   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:01.085483   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:01 GMT
	I0612 13:21:01.086121   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-269100","namespace":"kube-system","uid":"78bc6f0f-601f-4bca-9874-f57640a6545d","resourceVersion":"622","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.mirror":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.seen":"2024-06-12T20:18:10.244901797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5459 chars]
	I0612 13:21:01.087178   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:01.087178   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:01.087178   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:01.087178   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:01.092976   10368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:21:01.092976   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:01.092976   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:01 GMT
	I0612 13:21:01.092976   10368 round_trippers.go:580]     Audit-Id: 99213258-d286-4d2a-ad6e-6d8dc6f447b9
	I0612 13:21:01.092976   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:01.092976   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:01.092976   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:01.092976   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:01.094461   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:01.094461   10368 pod_ready.go:92] pod "kube-scheduler-functional-269100" in "kube-system" namespace has status "Ready":"True"
	I0612 13:21:01.095045   10368 pod_ready.go:81] duration metric: took 2.0303956s for pod "kube-scheduler-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:01.095045   10368 pod_ready.go:38] duration metric: took 11.6238784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
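
Between 13:20:59.05 and 13:21:01.09 the scheduler pod is re-fetched five times at roughly 500ms intervals, until its resourceVersion moves from 551 to 622 and the Ready condition flips to True. Below is a sketch of that poll-until-ready loop using client-go and apimachinery's wait helpers; the interval and per-pod timeout mirror the log, while the function name and error handling are illustrative, not minikube's pod_ready.go itself:

    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod every 500ms until it reports Ready or
    // the timeout expires (4m0s per pod in the log above).
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as transient and keep polling
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }
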
	I0612 13:21:01.095214   10368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 13:21:01.115969   10368 command_runner.go:130] > -16
	I0612 13:21:01.116343   10368 ops.go:34] apiserver oom_adj: -16
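
The probe above reads /proc/&lt;pid&gt;/oom_adj for the kube-apiserver process; -16 tells the kernel OOM killer to strongly prefer other victims (the legacy oom_adj range is -17..15, with -17 disabling OOM kills entirely; newer kernels expose oom_score_adj instead). A local, Linux-only sketch of the same probe, run directly rather than over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same command the log shows minikube running in the guest:
    	// resolve the kube-apiserver PID and read its oom_adj.
    	out, err := exec.Command("/bin/bash", "-c",
    		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
    }
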
	I0612 13:21:01.116407   10368 kubeadm.go:591] duration metric: took 21.4407717s to restartPrimaryControlPlane
	I0612 13:21:01.116407   10368 kubeadm.go:393] duration metric: took 21.5200544s to StartCluster
	I0612 13:21:01.116407   10368 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:21:01.116407   10368 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:21:01.118047   10368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
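
The lock.go:35 line shows the kubeconfig write is serialized behind a named lock that retries every 500ms for up to 1m0s, so concurrent minikube processes cannot corrupt the file. A rough std-lib sketch of that acquire-with-retry shape using an O_EXCL lockfile (minikube's actual lock implementation differs; this only illustrates the Delay/Timeout behaviour from the log):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock retries every `delay` until `timeout` elapses, using
    // exclusive creation of a lockfile as the mutual-exclusion primitive.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if !errors.Is(err, os.ErrExist) {
    			return nil, err
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("kubeconfig.lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	// ... write the kubeconfig while holding the lock ...
    }
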
	I0612 13:21:01.120233   10368 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 13:21:01.120359   10368 addons.go:69] Setting storage-provisioner=true in profile "functional-269100"
	I0612 13:21:01.120233   10368 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.195.181 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:21:01.125629   10368 out.go:177] * Verifying Kubernetes components...
	I0612 13:21:01.120359   10368 addons.go:234] Setting addon storage-provisioner=true in "functional-269100"
	I0612 13:21:01.120359   10368 addons.go:69] Setting default-storageclass=true in profile "functional-269100"
	I0612 13:21:01.120816   10368 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	W0612 13:21:01.127735   10368 addons.go:243] addon storage-provisioner should already be in state true
	I0612 13:21:01.127735   10368 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-269100"
	I0612 13:21:01.127735   10368 host.go:66] Checking if "functional-269100" exists ...
	I0612 13:21:01.129775   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:21:01.130503   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:21:01.146160   10368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:21:01.425360   10368 ssh_runner.go:195] Run: sudo systemctl start kubelet
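
The two ssh_runner.go:195 commands reload systemd's unit files and (re)start kubelet inside the guest VM before the node readiness wait begins. A minimal sketch of running such commands over SSH with golang.org/x/crypto/ssh (user, credentials, and port are placeholders; only the guest IP comes from the log):

    package main

    import (
    	"log"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",                                 // placeholder user
    		Auth:            []ssh.AuthMethod{ssh.Password("secret")}, // placeholder auth
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // acceptable only for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "172.23.195.181:22", cfg) // guest IP from the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Each command needs its own session, mirroring the two
    	// separate ssh_runner invocations in the log.
    	for _, cmd := range []string{"sudo systemctl daemon-reload", "sudo systemctl start kubelet"} {
    		sess, err := client.NewSession()
    		if err != nil {
    			log.Fatal(err)
    		}
    		if err := sess.Run(cmd); err != nil {
    			log.Fatalf("%s: %v", cmd, err)
    		}
    		sess.Close()
    	}
    }
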
	I0612 13:21:01.453626   10368 node_ready.go:35] waiting up to 6m0s for node "functional-269100" to be "Ready" ...
	I0612 13:21:01.453812   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:01.453812   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:01.453812   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:01.453812   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:01.463232   10368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 13:21:01.463232   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:01.463232   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:01.463232   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:01 GMT
	I0612 13:21:01.463232   10368 round_trippers.go:580]     Audit-Id: c4a5e7cb-bff2-4ed0-aac8-8f2a2123b42b
	I0612 13:21:01.463232   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:01.463232   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:01.463232   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:01.463858   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:01.464483   10368 node_ready.go:49] node "functional-269100" has status "Ready":"True"
	I0612 13:21:01.464483   10368 node_ready.go:38] duration metric: took 10.742ms for node "functional-269100" to be "Ready" ...
	I0612 13:21:01.464483   10368 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 13:21:01.464483   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods
	I0612 13:21:01.464483   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:01.464483   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:01.464483   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:01.467448   10368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 13:21:01.469810   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:01.469908   10368 round_trippers.go:580]     Audit-Id: d66cc3af-570d-4a53-82d2-705a6d95c741
	I0612 13:21:01.469908   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:01.470006   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:01.470114   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:01.470342   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:01.470479   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:01 GMT
	I0612 13:21:01.472243   10368 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"611","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50674 chars]
	I0612 13:21:01.474755   10368 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:01.475930   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8b5dd
	I0612 13:21:01.475930   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:01.475930   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:01.475930   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:01.479710   10368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:21:01.479807   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:01.479887   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:01.479887   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:01.479887   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:01.479887   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:01.479887   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:01 GMT
	I0612 13:21:01.479951   10368 round_trippers.go:580]     Audit-Id: 4c34e3cb-496a-4991-b931-01031b06cd4e
	I0612 13:21:01.479951   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"611","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0612 13:21:01.508776   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:01.508776   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:01.508776   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:01.508776   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:01.509451   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:01.512880   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:01.512880   10368 round_trippers.go:580]     Audit-Id: fc883149-4787-4cb7-87ae-cb9c33191630
	I0612 13:21:01.512880   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:01.512880   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:01.512880   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:01.512880   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:01.512880   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:01 GMT
	I0612 13:21:01.513296   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:01.513370   10368 pod_ready.go:92] pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace has status "Ready":"True"
	I0612 13:21:01.513370   10368 pod_ready.go:81] duration metric: took 38.085ms for pod "coredns-7db6d8ff4d-8b5dd" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:01.513370   10368 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:01.701882   10368 request.go:629] Waited for 187.5587ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/etcd-functional-269100
	I0612 13:21:01.702022   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/etcd-functional-269100
	I0612 13:21:01.702022   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:01.702128   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:01.702128   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:01.702536   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:01.709646   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:01.709646   10368 round_trippers.go:580]     Audit-Id: 022966b5-3b66-46ac-a480-178eb24abc6d
	I0612 13:21:01.709646   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:01.709741   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:01.709741   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:01.709741   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:01.709741   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:01 GMT
	I0612 13:21:01.709968   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-269100","namespace":"kube-system","uid":"e0ce37ea-d250-471f-91dc-c6d5c1dbc26a","resourceVersion":"610","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.195.181:2379","kubernetes.io/config.hash":"7782b140ee8d830e857a9b7f130b7d9f","kubernetes.io/config.mirror":"7782b140ee8d830e857a9b7f130b7d9f","kubernetes.io/config.seen":"2024-06-12T20:18:10.244895797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6380 chars]
	I0612 13:21:01.909824   10368 request.go:629] Waited for 199.2587ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:01.909824   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:01.909824   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:01.909824   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:01.909824   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:01.914068   10368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:21:01.914154   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:01.914154   10368 round_trippers.go:580]     Audit-Id: bea9f47c-18c3-434b-a77f-8a48712316b8
	I0612 13:21:01.914154   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:01.914154   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:01.914154   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:01.914154   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:01.914229   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:01 GMT
	I0612 13:21:01.914229   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:01.914982   10368 pod_ready.go:92] pod "etcd-functional-269100" in "kube-system" namespace has status "Ready":"True"
	I0612 13:21:01.914982   10368 pod_ready.go:81] duration metric: took 401.0542ms for pod "etcd-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:01.914982   10368 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:02.104460   10368 request.go:629] Waited for 189.2941ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-269100
	I0612 13:21:02.104460   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-269100
	I0612 13:21:02.104460   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:02.104460   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:02.104460   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:02.105072   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:02.108597   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:02.108597   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:02 GMT
	I0612 13:21:02.108597   10368 round_trippers.go:580]     Audit-Id: b3a53911-6ec4-4d43-aa2c-cf6aefa80a3c
	I0612 13:21:02.108597   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:02.108597   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:02.108597   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:02.108866   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:02.108929   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-269100","namespace":"kube-system","uid":"60fe25bc-0c96-4afe-9cb1-dfc324dc7ac3","resourceVersion":"608","creationTimestamp":"2024-06-12T20:18:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.195.181:8441","kubernetes.io/config.hash":"83916be9a9cb886186be47e309104ec9","kubernetes.io/config.mirror":"83916be9a9cb886186be47e309104ec9","kubernetes.io/config.seen":"2024-06-12T20:18:02.839843795Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8059 chars]
	I0612 13:21:02.297225   10368 request.go:629] Waited for 186.9767ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:02.297442   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:02.297442   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:02.297442   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:02.297510   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:02.297753   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:02.301319   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:02.301319   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:02 GMT
	I0612 13:21:02.301319   10368 round_trippers.go:580]     Audit-Id: 31fccd22-ee34-411f-8932-86d36facedbd
	I0612 13:21:02.301319   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:02.301319   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:02.301319   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:02.301319   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:02.301319   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:02.301991   10368 pod_ready.go:92] pod "kube-apiserver-functional-269100" in "kube-system" namespace has status "Ready":"True"
	I0612 13:21:02.301991   10368 pod_ready.go:81] duration metric: took 387.0085ms for pod "kube-apiserver-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:02.301991   10368 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:02.505415   10368 request.go:629] Waited for 203.2667ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-269100
	I0612 13:21:02.505635   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-269100
	I0612 13:21:02.505635   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:02.505738   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:02.505738   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:02.506388   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:02.506388   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:02.506388   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:02.506388   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:02.506388   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:02.509003   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:02.509003   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:02 GMT
	I0612 13:21:02.509003   10368 round_trippers.go:580]     Audit-Id: bf79f46b-745a-4684-b18e-d315d3648b33
	I0612 13:21:02.509322   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-269100","namespace":"kube-system","uid":"e706c833-6890-462a-b7a3-240a9fd2470a","resourceVersion":"616","creationTimestamp":"2024-06-12T20:18:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.mirror":"d7d12408752e1bb974ae4e17ddb793d4","kubernetes.io/config.seen":"2024-06-12T20:18:02.839844795Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0612 13:21:02.701864   10368 request.go:629] Waited for 191.6273ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:02.702035   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:02.702035   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:02.702035   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:02.702035   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:02.702783   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:02.702783   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:02.702783   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:02.702783   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:02.702783   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:02.706085   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:02.706085   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:02 GMT
	I0612 13:21:02.706085   10368 round_trippers.go:580]     Audit-Id: bb82b4bd-61c0-4cf3-ba27-c5ef83a430ef
	I0612 13:21:02.706380   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:02.707001   10368 pod_ready.go:92] pod "kube-controller-manager-functional-269100" in "kube-system" namespace has status "Ready":"True"
	I0612 13:21:02.707053   10368 pod_ready.go:81] duration metric: took 405.0607ms for pod "kube-controller-manager-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:02.707053   10368 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n648c" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:02.895415   10368 request.go:629] Waited for 188.0622ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-proxy-n648c
	I0612 13:21:02.895640   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-proxy-n648c
	I0612 13:21:02.895640   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:02.895730   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:02.895730   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:02.896333   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:02.899933   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:02.899933   10368 round_trippers.go:580]     Audit-Id: 91731a99-aa2e-42dd-b45a-c6c51c442b9a
	I0612 13:21:02.899933   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:02.899933   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:02.899933   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:02.899933   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:02.899933   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:02 GMT
	I0612 13:21:02.900487   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n648c","generateName":"kube-proxy-","namespace":"kube-system","uid":"4f6f5e07-4ced-484d-a47c-1af2e55ce102","resourceVersion":"606","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d7be40b9-7c48-4a96-b5e2-e63d6851e21e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d7be40b9-7c48-4a96-b5e2-e63d6851e21e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6180 chars]
	I0612 13:21:03.102227   10368 request.go:629] Waited for 200.9368ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:03.102581   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:03.102581   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:03.102581   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:03.102768   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:03.103024   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:03.106530   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:03.106530   10368 round_trippers.go:580]     Audit-Id: 56ddc458-f306-45bc-8bdc-a1cf3a5b9386
	I0612 13:21:03.106530   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:03.106530   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:03.106530   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:03.106612   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:03.106612   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:03 GMT
	I0612 13:21:03.106683   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:03.107258   10368 pod_ready.go:92] pod "kube-proxy-n648c" in "kube-system" namespace has status "Ready":"True"
	I0612 13:21:03.107258   10368 pod_ready.go:81] duration metric: took 400.1563ms for pod "kube-proxy-n648c" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:03.107323   10368 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:03.297317   10368 request.go:629] Waited for 189.7485ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-269100
	I0612 13:21:03.297424   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-269100
	I0612 13:21:03.297424   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:03.297424   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:03.297424   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:03.297696   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:03.297696   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:03.297696   10368 round_trippers.go:580]     Audit-Id: 408ca1ca-8bde-4dc1-96ad-42cea3af8776
	I0612 13:21:03.297696   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:03.297696   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:03.297696   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:03.297696   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:03.297696   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:03 GMT
	I0612 13:21:03.301776   10368 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-269100","namespace":"kube-system","uid":"78bc6f0f-601f-4bca-9874-f57640a6545d","resourceVersion":"622","creationTimestamp":"2024-06-12T20:18:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.mirror":"a7496fffb9e28e7144da51b95c59b4e9","kubernetes.io/config.seen":"2024-06-12T20:18:10.244901797Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5459 chars]
	I0612 13:21:03.332227   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:21:03.332227   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:21:03.343066   10368 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:21:03.343837   10368 kapi.go:59] client config for functional-269100: &rest.Config{Host:"https://172.23.195.181:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-269100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-269100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 13:21:03.343946   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:21:03.343946   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:21:03.347114   10368 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 13:21:03.344591   10368 addons.go:234] Setting addon default-storageclass=true in "functional-269100"
	W0612 13:21:03.347195   10368 addons.go:243] addon default-storageclass should already be in state true
	I0612 13:21:03.349893   10368 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 13:21:03.349893   10368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 13:21:03.349893   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:21:03.349893   10368 host.go:66] Checking if "functional-269100" exists ...
	I0612 13:21:03.351263   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:21:03.508468   10368 request.go:629] Waited for 205.3593ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:03.508594   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes/functional-269100
	I0612 13:21:03.508594   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:03.508838   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:03.508838   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:03.514344   10368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:21:03.514540   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:03.514540   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:03.514540   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:03 GMT
	I0612 13:21:03.514540   10368 round_trippers.go:580]     Audit-Id: 553addef-ceba-4a75-a9e0-bffea139e79b
	I0612 13:21:03.514540   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:03.514540   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:03.514540   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:03.515308   10368 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-12T20:18:06Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0612 13:21:03.515830   10368 pod_ready.go:92] pod "kube-scheduler-functional-269100" in "kube-system" namespace has status "Ready":"True"
	I0612 13:21:03.515980   10368 pod_ready.go:81] duration metric: took 408.6553ms for pod "kube-scheduler-functional-269100" in "kube-system" namespace to be "Ready" ...
	I0612 13:21:03.516054   10368 pod_ready.go:38] duration metric: took 2.0515642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 13:21:03.516054   10368 api_server.go:52] waiting for apiserver process to appear ...
	I0612 13:21:03.529603   10368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:21:03.558149   10368 command_runner.go:130] > 5855
	I0612 13:21:03.558262   10368 api_server.go:72] duration metric: took 2.437895s to wait for apiserver process to appear ...
	I0612 13:21:03.558451   10368 api_server.go:88] waiting for apiserver healthz status ...
	I0612 13:21:03.558451   10368 api_server.go:253] Checking apiserver healthz at https://172.23.195.181:8441/healthz ...
	I0612 13:21:03.568332   10368 api_server.go:279] https://172.23.195.181:8441/healthz returned 200:
	ok
	I0612 13:21:03.568425   10368 round_trippers.go:463] GET https://172.23.195.181:8441/version
	I0612 13:21:03.568425   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:03.568425   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:03.568425   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:03.570643   10368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 13:21:03.570784   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:03.570784   10368 round_trippers.go:580]     Audit-Id: 806b91ab-3290-4a5c-bcc3-50ca09abbc54
	I0612 13:21:03.570784   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:03.570784   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:03.570784   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:03.570784   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:03.570784   10368 round_trippers.go:580]     Content-Length: 263
	I0612 13:21:03.570784   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:03 GMT
	I0612 13:21:03.570784   10368 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0612 13:21:03.570784   10368 api_server.go:141] control plane version: v1.30.1
	I0612 13:21:03.570784   10368 api_server.go:131] duration metric: took 12.3329ms to wait for apiserver health ...
	I0612 13:21:03.570784   10368 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 13:21:03.696900   10368 request.go:629] Waited for 125.9412ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods
	I0612 13:21:03.697082   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods
	I0612 13:21:03.697082   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:03.697082   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:03.697082   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:03.697803   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:03.702890   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:03.702890   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:03.702890   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:03.702890   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:03.702890   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:03 GMT
	I0612 13:21:03.702890   10368 round_trippers.go:580]     Audit-Id: e7256e6f-3c3e-4955-ae6f-acfc4b017866
	I0612 13:21:03.702890   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:03.703948   10368 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"611","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50674 chars]
	I0612 13:21:03.708330   10368 system_pods.go:59] 7 kube-system pods found
	I0612 13:21:03.708447   10368 system_pods.go:61] "coredns-7db6d8ff4d-8b5dd" [3a86a91c-36de-41f5-b243-00743e29acba] Running
	I0612 13:21:03.708447   10368 system_pods.go:61] "etcd-functional-269100" [e0ce37ea-d250-471f-91dc-c6d5c1dbc26a] Running
	I0612 13:21:03.708447   10368 system_pods.go:61] "kube-apiserver-functional-269100" [60fe25bc-0c96-4afe-9cb1-dfc324dc7ac3] Running
	I0612 13:21:03.708447   10368 system_pods.go:61] "kube-controller-manager-functional-269100" [e706c833-6890-462a-b7a3-240a9fd2470a] Running
	I0612 13:21:03.708587   10368 system_pods.go:61] "kube-proxy-n648c" [4f6f5e07-4ced-484d-a47c-1af2e55ce102] Running
	I0612 13:21:03.708587   10368 system_pods.go:61] "kube-scheduler-functional-269100" [78bc6f0f-601f-4bca-9874-f57640a6545d] Running
	I0612 13:21:03.708587   10368 system_pods.go:61] "storage-provisioner" [a5945727-bd26-4c6e-8afe-1ae05bcd4944] Running
	I0612 13:21:03.708648   10368 system_pods.go:74] duration metric: took 137.8634ms to wait for pod list to return data ...
	I0612 13:21:03.708648   10368 default_sa.go:34] waiting for default service account to be created ...
	I0612 13:21:03.896630   10368 request.go:629] Waited for 187.8544ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/namespaces/default/serviceaccounts
	I0612 13:21:03.896630   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/default/serviceaccounts
	I0612 13:21:03.896630   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:03.896630   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:03.896858   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:03.897811   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:03.901182   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:03.901287   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:03 GMT
	I0612 13:21:03.901349   10368 round_trippers.go:580]     Audit-Id: a5bee4cf-1db0-4634-ac1e-04cee4bea497
	I0612 13:21:03.901349   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:03.901349   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:03.901435   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:03.901469   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:03.901469   10368 round_trippers.go:580]     Content-Length: 261
	I0612 13:21:03.901562   10368 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5f44b635-d7be-44f3-b508-f3fac08dd300","resourceVersion":"354","creationTimestamp":"2024-06-12T20:18:24Z"}}]}
	I0612 13:21:03.901851   10368 default_sa.go:45] found service account: "default"
	I0612 13:21:03.901851   10368 default_sa.go:55] duration metric: took 193.1468ms for default service account to be created ...
	I0612 13:21:03.902050   10368 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 13:21:04.105894   10368 request.go:629] Waited for 203.4103ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods
	I0612 13:21:04.106135   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/namespaces/kube-system/pods
	I0612 13:21:04.106135   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:04.106135   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:04.106382   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:04.106727   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:04.111384   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:04.111384   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:04.111384   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:04.111384   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:04.111384   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:04.111384   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:04 GMT
	I0612 13:21:04.111384   10368 round_trippers.go:580]     Audit-Id: 6c215e0e-9590-4059-afda-749aa5a75faa
	I0612 13:21:04.113667   10368 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8b5dd","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"3a86a91c-36de-41f5-b243-00743e29acba","resourceVersion":"611","creationTimestamp":"2024-06-12T20:18:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"a982b302-3928-460e-824d-887ada3b8b98","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T20:18:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a982b302-3928-460e-824d-887ada3b8b98\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50674 chars]
	I0612 13:21:04.117955   10368 system_pods.go:86] 7 kube-system pods found
	I0612 13:21:04.117955   10368 system_pods.go:89] "coredns-7db6d8ff4d-8b5dd" [3a86a91c-36de-41f5-b243-00743e29acba] Running
	I0612 13:21:04.117955   10368 system_pods.go:89] "etcd-functional-269100" [e0ce37ea-d250-471f-91dc-c6d5c1dbc26a] Running
	I0612 13:21:04.117955   10368 system_pods.go:89] "kube-apiserver-functional-269100" [60fe25bc-0c96-4afe-9cb1-dfc324dc7ac3] Running
	I0612 13:21:04.117955   10368 system_pods.go:89] "kube-controller-manager-functional-269100" [e706c833-6890-462a-b7a3-240a9fd2470a] Running
	I0612 13:21:04.117955   10368 system_pods.go:89] "kube-proxy-n648c" [4f6f5e07-4ced-484d-a47c-1af2e55ce102] Running
	I0612 13:21:04.118509   10368 system_pods.go:89] "kube-scheduler-functional-269100" [78bc6f0f-601f-4bca-9874-f57640a6545d] Running
	I0612 13:21:04.118509   10368 system_pods.go:89] "storage-provisioner" [a5945727-bd26-4c6e-8afe-1ae05bcd4944] Running
	I0612 13:21:04.118509   10368 system_pods.go:126] duration metric: took 216.4585ms to wait for k8s-apps to be running ...
	I0612 13:21:04.118667   10368 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 13:21:04.126436   10368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:21:04.158038   10368 system_svc.go:56] duration metric: took 39.2666ms WaitForService to wait for kubelet
	I0612 13:21:04.158038   10368 kubeadm.go:576] duration metric: took 3.0376689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 13:21:04.158126   10368 node_conditions.go:102] verifying NodePressure condition ...
	I0612 13:21:04.306663   10368 request.go:629] Waited for 148.257ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.195.181:8441/api/v1/nodes
	I0612 13:21:04.306738   10368 round_trippers.go:463] GET https://172.23.195.181:8441/api/v1/nodes
	I0612 13:21:04.306881   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:04.306935   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:04.306935   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:04.307472   10368 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 13:21:04.307472   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:04.307472   10368 round_trippers.go:580]     Audit-Id: 4dabd4c8-1c4a-4f82-a0a4-4af4e23b9862
	I0612 13:21:04.307472   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:04.307472   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:04.307472   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:04.307472   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:04.307472   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:04 GMT
	I0612 13:21:04.311778   10368 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"functional-269100","uid":"c0614627-b0a3-4482-8a4e-97c0e03e49c1","resourceVersion":"545","creationTimestamp":"2024-06-12T20:18:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-269100","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"functional-269100","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T13_18_10_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0612 13:21:04.312148   10368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:21:04.312148   10368 node_conditions.go:123] node cpu capacity is 2
	I0612 13:21:04.312148   10368 node_conditions.go:105] duration metric: took 154.0212ms to run NodePressure ...
	I0612 13:21:04.312148   10368 start.go:240] waiting for startup goroutines ...
	I0612 13:21:05.599425   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:21:05.612015   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:21:05.612015   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:21:05.612502   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:21:05.612502   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:21:05.612502   10368 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 13:21:05.612502   10368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 13:21:05.612502   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
	I0612 13:21:07.840919   10368 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:21:07.840919   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:21:07.841152   10368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
	I0612 13:21:08.244056   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:21:08.244056   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:21:08.256499   10368 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
	I0612 13:21:08.415936   10368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 13:21:09.263029   10368 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0612 13:21:09.263029   10368 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0612 13:21:09.263029   10368 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0612 13:21:09.263029   10368 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0612 13:21:09.263029   10368 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0612 13:21:09.263029   10368 command_runner.go:130] > pod/storage-provisioner configured
	I0612 13:21:10.454634   10368 main.go:141] libmachine: [stdout =====>] : 172.23.195.181
	
	I0612 13:21:10.454634   10368 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:21:10.466633   10368 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
	I0612 13:21:10.596501   10368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 13:21:10.760634   10368 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0612 13:21:10.761000   10368 round_trippers.go:463] GET https://172.23.195.181:8441/apis/storage.k8s.io/v1/storageclasses
	I0612 13:21:10.761054   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:10.761054   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:10.761054   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:10.769276   10368 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:21:10.769369   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:10.769369   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:10.769369   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:10.769369   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:10.769369   10368 round_trippers.go:580]     Content-Length: 1273
	I0612 13:21:10.769369   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:10 GMT
	I0612 13:21:10.769369   10368 round_trippers.go:580]     Audit-Id: 7e4c09a2-7d17-4718-b839-e41bfcd806ff
	I0612 13:21:10.769369   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:10.769369   10368 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"629"},"items":[{"metadata":{"name":"standard","uid":"1c596c89-59e8-4ca2-be6f-4ae69862601d","resourceVersion":"434","creationTimestamp":"2024-06-12T20:18:34Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-12T20:18:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0612 13:21:10.769994   10368 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"1c596c89-59e8-4ca2-be6f-4ae69862601d","resourceVersion":"434","creationTimestamp":"2024-06-12T20:18:34Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-12T20:18:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0612 13:21:10.769994   10368 round_trippers.go:463] PUT https://172.23.195.181:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0612 13:21:10.769994   10368 round_trippers.go:469] Request Headers:
	I0612 13:21:10.769994   10368 round_trippers.go:473]     Content-Type: application/json
	I0612 13:21:10.769994   10368 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:21:10.769994   10368 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:21:10.774757   10368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:21:10.774757   10368 round_trippers.go:577] Response Headers:
	I0612 13:21:10.774757   10368 round_trippers.go:580]     Audit-Id: 73d0e01b-0e60-41ca-842c-a17e00aa797d
	I0612 13:21:10.774757   10368 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 13:21:10.774757   10368 round_trippers.go:580]     Content-Type: application/json
	I0612 13:21:10.774757   10368 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76a0269f-49be-40c0-aa7b-2e1f0b3a5535
	I0612 13:21:10.774757   10368 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be44d2db-4673-40a9-a9d7-6c84092f1763
	I0612 13:21:10.774757   10368 round_trippers.go:580]     Content-Length: 1220
	I0612 13:21:10.774757   10368 round_trippers.go:580]     Date: Wed, 12 Jun 2024 20:21:10 GMT
	I0612 13:21:10.774757   10368 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"1c596c89-59e8-4ca2-be6f-4ae69862601d","resourceVersion":"434","creationTimestamp":"2024-06-12T20:18:34Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-12T20:18:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0612 13:21:10.781463   10368 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0612 13:21:10.783784   10368 addons.go:510] duration metric: took 9.6635219s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0612 13:21:10.783784   10368 start.go:245] waiting for cluster config update ...
	I0612 13:21:10.783784   10368 start.go:254] writing updated cluster config ...
	I0612 13:21:10.792772   10368 ssh_runner.go:195] Run: rm -f paused
	I0612 13:21:10.934015   10368 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 13:21:10.937824   10368 out.go:177] * Done! kubectl is now configured to use "functional-269100" cluster and "default" namespace by default
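	Note on the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above: the rest.Config logged earlier shows QPS:0, Burst:0, which client-go treats as its defaults (5 requests/s, burst 10), so once the burst is spent each request blocks for roughly 1/5 s, matching the 186-205ms waits in this log. A minimal, illustrative Go sketch of that token-bucket behavior (not minikube's code; the package layout and loop bounds are assumptions for demonstration):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/util/flowcontrol"
	)

	func main() {
		// client-go wraps every request in a token-bucket limiter; QPS/Burst
		// of 0 in rest.Config fall back to the defaults used here (5 QPS, burst 10).
		limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
		for i := 0; i < 12; i++ {
			start := time.Now()
			limiter.Accept() // blocks until a token is available
			if wait := time.Since(start); wait > time.Millisecond {
				// after the burst of 10, each call waits ~200ms, as in the log above
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, wait)
			}
		}
	}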
	
	
	==> Docker <==
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.049006440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.049120043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.077837077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.077922079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.077955480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.078079483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.108261754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.109631989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.110384008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.112876072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 cri-dockerd[4479]: time="2024-06-12T20:20:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d70b0eb66bc02b661f18e2fd7f76be6c8bdd0c560bb4c1f98136c3cb6bf8eaa/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 20:20:48 functional-269100 cri-dockerd[4479]: time="2024-06-12T20:20:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b25d75cfb20e54afcc52d46677662b8e23a2fcf620657f7ae6dadbdeb36bc15d/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 20:20:48 functional-269100 cri-dockerd[4479]: time="2024-06-12T20:20:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ed243958d65d7d7c8e176f5b8b951ff4af0775e2ee95364d636dd1cc457048c/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.648222489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.648495197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.648713802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.649011410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.713167080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.713403586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.713591491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.714214307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.946251247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.946850563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.947034468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:20:48 functional-269100 dockerd[4257]: time="2024-06-12T20:20:48.948148397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	101e658f77ee0       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   8ed243958d65d       coredns-7db6d8ff4d-8b5dd
	234cb67d87c61       747097150317f       2 minutes ago       Running             kube-proxy                2                   b25d75cfb20e5       kube-proxy-n648c
	fc7c02696ac68       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   6d70b0eb66bc0       storage-provisioner
	82a1cf67195f9       91be940803172       2 minutes ago       Running             kube-apiserver            2                   1d1bf53f9486d       kube-apiserver-functional-269100
	43c2869bb2ca1       a52dc94f0a912       2 minutes ago       Running             kube-scheduler            2                   ff015baf7b276       kube-scheduler-functional-269100
	b7355d15aefaa       25a1387cdab82       2 minutes ago       Running             kube-controller-manager   1                   744f2e71f7367       kube-controller-manager-functional-269100
	08474cd194e8e       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   db02cb5416eba       etcd-functional-269100
	69ab259e005da       747097150317f       2 minutes ago       Created             kube-proxy                1                   f60b6e2c9ca0d       kube-proxy-n648c
	c4cb04828d2fe       91be940803172       2 minutes ago       Created             kube-apiserver            1                   bf936158b697c       kube-apiserver-functional-269100
	0ae84e775a765       a52dc94f0a912       2 minutes ago       Created             kube-scheduler            1                   d6f38411bd8ea       kube-scheduler-functional-269100
	5e57619115f00       3861cfcd7c04c       2 minutes ago       Exited              etcd                      1                   27e29d0ad2582       etcd-functional-269100
	58335f6ca6721       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       1                   34d988411ef12       storage-provisioner
	3ab3d96f3951f       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   63d9b50b1e602       coredns-7db6d8ff4d-8b5dd
	8e3e126deeab4       25a1387cdab82       4 minutes ago       Exited              kube-controller-manager   0                   3f34d95cace3c       kube-controller-manager-functional-269100
	
	
	==> coredns [101e658f77ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53268 - 54415 "HINFO IN 9143717860786064426.599873215217959106. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.127191503s
	
	
	==> coredns [3ab3d96f3951] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[427092202]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:18:26.687) (total time: 30001ms):
	Trace[427092202]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:18:56.689)
	Trace[427092202]: [30.001546187s] [30.001546187s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1287503895]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:18:26.688) (total time: 30001ms):
	Trace[1287503895]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:18:56.689)
	Trace[1287503895]: [30.001372784s] [30.001372784s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1474207557]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Jun-2024 20:18:26.688) (total time: 30002ms):
	Trace[1474207557]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:18:56.689)
	Trace[1474207557]: [30.002373885s] [30.002373885s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57987 - 27391 "HINFO IN 8373734686230459062.450140439347366977. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.058027776s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-269100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-269100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=functional-269100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T13_18_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:18:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-269100
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:22:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:22:50 +0000   Wed, 12 Jun 2024 20:18:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:22:50 +0000   Wed, 12 Jun 2024 20:18:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:22:50 +0000   Wed, 12 Jun 2024 20:18:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:22:50 +0000   Wed, 12 Jun 2024 20:18:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.195.181
	  Hostname:    functional-269100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdf6e92615164fcb914f11e04effc592
	  System UUID:                5196ed42-1cd0-2749-9a2c-2fcaf0a15274
	  Boot ID:                    a51dc4d9-4e88-4b19-ad10-742916beb646
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8b5dd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m28s
	  kube-system                 etcd-functional-269100                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m42s
	  kube-system                 kube-apiserver-functional-269100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-functional-269100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-proxy-n648c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-functional-269100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m3s                   kube-proxy       
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m50s)  kubelet          Node functional-269100 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m50s)  kubelet          Node functional-269100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m50s)  kubelet          Node functional-269100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s                  kubelet          Node functional-269100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m42s                  kubelet          Node functional-269100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s                  kubelet          Node functional-269100 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m40s                  kubelet          Node functional-269100 status is now: NodeReady
	  Normal  RegisteredNode           4m28s                  node-controller  Node functional-269100 event: Registered Node functional-269100 in Controller
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node functional-269100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node functional-269100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node functional-269100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           112s                   node-controller  Node functional-269100 event: Registered Node functional-269100 in Controller
	
	
	==> dmesg <==
	[  +5.168627] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.668463] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[Jun12 20:18] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +0.097623] kauditd_printk_skb: 51 callbacks suppressed
	[  +7.520753] systemd-fstab-generator[2123]: Ignoring "noauto" option for root device
	[  +0.124865] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.933382] systemd-fstab-generator[2370]: Ignoring "noauto" option for root device
	[  +0.179573] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.945743] kauditd_printk_skb: 88 callbacks suppressed
	[Jun12 20:19] kauditd_printk_skb: 10 callbacks suppressed
	[Jun12 20:20] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.703980] systemd-fstab-generator[3809]: Ignoring "noauto" option for root device
	[  +0.286525] systemd-fstab-generator[3835]: Ignoring "noauto" option for root device
	[  +0.322911] systemd-fstab-generator[3849]: Ignoring "noauto" option for root device
	[  +5.371360] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.855850] systemd-fstab-generator[4427]: Ignoring "noauto" option for root device
	[  +0.220027] systemd-fstab-generator[4439]: Ignoring "noauto" option for root device
	[  +0.189149] systemd-fstab-generator[4451]: Ignoring "noauto" option for root device
	[  +0.298287] systemd-fstab-generator[4466]: Ignoring "noauto" option for root device
	[  +0.861889] systemd-fstab-generator[4625]: Ignoring "noauto" option for root device
	[  +3.877684] systemd-fstab-generator[5364]: Ignoring "noauto" option for root device
	[  +0.102325] kauditd_printk_skb: 189 callbacks suppressed
	[  +6.055171] kauditd_printk_skb: 52 callbacks suppressed
	[Jun12 20:21] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.870014] systemd-fstab-generator[6386]: Ignoring "noauto" option for root device
	
	
	==> etcd [08474cd194e8] <==
	{"level":"info","ts":"2024-06-12T20:20:43.977952Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T20:20:43.979067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T20:20:43.978952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 switched to configuration voters=(17928359187845531314)"}
	{"level":"info","ts":"2024-06-12T20:20:43.982771Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"df499a7f67a5c2ab","local-member-id":"f8ce53b15d24c6b2","added-peer-id":"f8ce53b15d24c6b2","added-peer-peer-urls":["https://172.23.195.181:2380"]}
	{"level":"info","ts":"2024-06-12T20:20:43.983138Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"df499a7f67a5c2ab","local-member-id":"f8ce53b15d24c6b2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T20:20:43.98335Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T20:20:44.043203Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T20:20:44.04342Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f8ce53b15d24c6b2","initial-advertise-peer-urls":["https://172.23.195.181:2380"],"listen-peer-urls":["https://172.23.195.181:2380"],"advertise-client-urls":["https://172.23.195.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.195.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T20:20:44.043442Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T20:20:44.043531Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.195.181:2380"}
	{"level":"info","ts":"2024-06-12T20:20:44.04354Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.195.181:2380"}
	{"level":"info","ts":"2024-06-12T20:20:45.59162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-12T20:20:45.591976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-12T20:20:45.59242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 received MsgPreVoteResp from f8ce53b15d24c6b2 at term 2"}
	{"level":"info","ts":"2024-06-12T20:20:45.592633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 became candidate at term 3"}
	{"level":"info","ts":"2024-06-12T20:20:45.592916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 received MsgVoteResp from f8ce53b15d24c6b2 at term 3"}
	{"level":"info","ts":"2024-06-12T20:20:45.59321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 became leader at term 3"}
	{"level":"info","ts":"2024-06-12T20:20:45.593399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8ce53b15d24c6b2 elected leader f8ce53b15d24c6b2 at term 3"}
	{"level":"info","ts":"2024-06-12T20:20:45.597532Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f8ce53b15d24c6b2","local-member-attributes":"{Name:functional-269100 ClientURLs:[https://172.23.195.181:2379]}","request-path":"/0/members/f8ce53b15d24c6b2/attributes","cluster-id":"df499a7f67a5c2ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T20:20:45.597762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T20:20:45.597832Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T20:20:45.598601Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T20:20:45.598813Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T20:20:45.600815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.195.181:2379"}
	{"level":"info","ts":"2024-06-12T20:20:45.60254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5e57619115f0] <==
	{"level":"warn","ts":"2024-06-12T20:20:40.098079Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-12T20:20:40.098493Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.195.181:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.195.181:2380","--initial-cluster=functional-269100=https://172.23.195.181:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.195.181:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.195.181:2380","--name=functional-269100","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-12T20:20:40.098836Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-12T20:20:40.099048Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-12T20:20:40.099471Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.195.181:2380"]}
	{"level":"info","ts":"2024-06-12T20:20:40.099778Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T20:20:40.102898Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.195.181:2379"]}
	{"level":"info","ts":"2024-06-12T20:20:40.103086Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-269100","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.195.181:2380"],"listen-peer-urls":["https://172.23.195.181:2380"],"advertise-client-urls":["https://172.23.195.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.195.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-12T20:20:40.129694Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"26.287919ms"}
	{"level":"info","ts":"2024-06-12T20:20:40.162863Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-12T20:20:40.17553Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"df499a7f67a5c2ab","local-member-id":"f8ce53b15d24c6b2","commit-index":574}
	{"level":"info","ts":"2024-06-12T20:20:40.179737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-12T20:20:40.179772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8ce53b15d24c6b2 became follower at term 2"}
	{"level":"info","ts":"2024-06-12T20:20:40.179784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f8ce53b15d24c6b2 [peers: [], term: 2, commit: 574, applied: 0, lastindex: 574, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-12T20:20:40.186574Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	
	
	==> kernel <==
	 20:22:52 up 6 min,  0 users,  load average: 0.28, 0.47, 0.24
	Linux functional-269100 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [82a1cf67195f] <==
	I0612 20:20:47.182394       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 20:20:47.186170       1 aggregator.go:165] initial CRD sync complete...
	I0612 20:20:47.186791       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 20:20:47.187030       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 20:20:47.187168       1 cache.go:39] Caches are synced for autoregister controller
	I0612 20:20:47.187389       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 20:20:47.187651       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 20:20:47.188921       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 20:20:47.188969       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 20:20:47.189170       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 20:20:47.198075       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 20:20:47.198112       1 policy_source.go:224] refreshing policies
	I0612 20:20:47.199059       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 20:20:47.199721       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 20:20:47.209839       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0612 20:20:47.228210       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0612 20:20:47.243987       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 20:20:48.067596       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 20:20:49.274697       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 20:20:49.308084       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 20:20:49.380150       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 20:20:49.437431       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 20:20:49.449070       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 20:21:00.352995       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 20:21:00.407844       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [c4cb04828d2f] <==
	
	
	==> kube-controller-manager [8e3e126deeab] <==
	I0612 20:18:24.528705       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 20:18:24.529397       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 20:18:24.530879       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 20:18:24.545171       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 20:18:24.566236       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 20:18:24.608982       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 20:18:24.615840       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 20:18:24.616864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="200.223638ms"
	I0612 20:18:24.641129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.709335ms"
	I0612 20:18:24.645216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.5µs"
	I0612 20:18:24.666906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="101.5µs"
	I0612 20:18:25.079388       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 20:18:25.110983       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 20:18:25.111013       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 20:18:25.969722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="145.960526ms"
	I0612 20:18:26.032886       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.105212ms"
	I0612 20:18:26.034153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="99.8µs"
	I0612 20:18:27.099924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49µs"
	I0612 20:18:27.147509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.9µs"
	I0612 20:18:37.049018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.1µs"
	I0612 20:18:37.271625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.8µs"
	I0612 20:18:37.285444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="169.9µs"
	I0612 20:18:37.298798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.7µs"
	I0612 20:19:04.931931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.984792ms"
	I0612 20:19:04.932769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="702.8µs"
	
	
	==> kube-controller-manager [b7355d15aefa] <==
	I0612 20:21:00.377196       1 shared_informer.go:320] Caches are synced for job
	I0612 20:21:00.382316       1 shared_informer.go:320] Caches are synced for taint
	I0612 20:21:00.382815       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 20:21:00.383005       1 shared_informer.go:320] Caches are synced for deployment
	I0612 20:21:00.383927       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-269100"
	I0612 20:21:00.384203       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 20:21:00.386700       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 20:21:00.394671       1 shared_informer.go:320] Caches are synced for node
	I0612 20:21:00.394758       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 20:21:00.394819       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 20:21:00.394873       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 20:21:00.394908       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 20:21:00.396984       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 20:21:00.400101       1 shared_informer.go:320] Caches are synced for expand
	I0612 20:21:00.404394       1 shared_informer.go:320] Caches are synced for HPA
	I0612 20:21:00.418138       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 20:21:00.457164       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 20:21:00.477485       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 20:21:00.482830       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 20:21:00.588426       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 20:21:00.589722       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 20:21:00.600613       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 20:21:01.037103       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 20:21:01.081699       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 20:21:01.081922       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [234cb67d87c6] <==
	I0612 20:20:48.954375       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:20:48.968080       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.195.181"]
	I0612 20:20:49.066202       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:20:49.066283       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:20:49.066304       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:20:49.072533       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:20:49.072772       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:20:49.072791       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:20:49.076363       1 config.go:192] "Starting service config controller"
	I0612 20:20:49.076373       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:20:49.076405       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:20:49.076410       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:20:49.076865       1 config.go:319] "Starting node config controller"
	I0612 20:20:49.076874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:20:49.178928       1 shared_informer.go:320] Caches are synced for node config
	I0612 20:20:49.178988       1 shared_informer.go:320] Caches are synced for service config
	I0612 20:20:49.179018       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [69ab259e005d] <==
	
	
	==> kube-scheduler [0ae84e775a76] <==
	
	
	==> kube-scheduler [43c2869bb2ca] <==
	I0612 20:20:45.499617       1 serving.go:380] Generated self-signed cert in-memory
	W0612 20:20:47.112545       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 20:20:47.112727       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 20:20:47.112741       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 20:20:47.112750       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 20:20:47.179252       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 20:20:47.179293       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:20:47.184354       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 20:20:47.186478       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 20:20:47.186515       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 20:20:47.186545       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 20:20:47.287657       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.263397    5371 topology_manager.go:215] "Topology Admit Handler" podUID="a5945727-bd26-4c6e-8afe-1ae05bcd4944" podNamespace="kube-system" podName="storage-provisioner"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: E0612 20:20:47.280847    5371 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-269100\" already exists" pod="kube-system/kube-apiserver-functional-269100"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.292021    5371 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.304674    5371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f6f5e07-4ced-484d-a47c-1af2e55ce102-xtables-lock\") pod \"kube-proxy-n648c\" (UID: \"4f6f5e07-4ced-484d-a47c-1af2e55ce102\") " pod="kube-system/kube-proxy-n648c"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.304731    5371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f6f5e07-4ced-484d-a47c-1af2e55ce102-lib-modules\") pod \"kube-proxy-n648c\" (UID: \"4f6f5e07-4ced-484d-a47c-1af2e55ce102\") " pod="kube-system/kube-proxy-n648c"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.304759    5371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a5945727-bd26-4c6e-8afe-1ae05bcd4944-tmp\") pod \"storage-provisioner\" (UID: \"a5945727-bd26-4c6e-8afe-1ae05bcd4944\") " pod="kube-system/storage-provisioner"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.338221    5371 kubelet_node_status.go:112] "Node was previously registered" node="functional-269100"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.338599    5371 kubelet_node_status.go:76] "Successfully registered node" node="functional-269100"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.340070    5371 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 12 20:20:47 functional-269100 kubelet[5371]: I0612 20:20:47.341159    5371 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 12 20:20:48 functional-269100 kubelet[5371]: I0612 20:20:48.339114    5371 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b25d75cfb20e54afcc52d46677662b8e23a2fcf620657f7ae6dadbdeb36bc15d"
	Jun 12 20:20:48 functional-269100 kubelet[5371]: I0612 20:20:48.389469    5371 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d70b0eb66bc02b661f18e2fd7f76be6c8bdd0c560bb4c1f98136c3cb6bf8eaa"
	Jun 12 20:20:48 functional-269100 kubelet[5371]: I0612 20:20:48.520875    5371 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ed243958d65d7d7c8e176f5b8b951ff4af0775e2ee95364d636dd1cc457048c"
	Jun 12 20:20:50 functional-269100 kubelet[5371]: I0612 20:20:50.602069    5371 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 12 20:20:57 functional-269100 kubelet[5371]: I0612 20:20:57.059178    5371 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 12 20:21:42 functional-269100 kubelet[5371]: E0612 20:21:42.427935    5371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:21:42 functional-269100 kubelet[5371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:21:42 functional-269100 kubelet[5371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:21:42 functional-269100 kubelet[5371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:21:42 functional-269100 kubelet[5371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:22:42 functional-269100 kubelet[5371]: E0612 20:22:42.426401    5371 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:22:42 functional-269100 kubelet[5371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:22:42 functional-269100 kubelet[5371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:22:42 functional-269100 kubelet[5371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:22:42 functional-269100 kubelet[5371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [58335f6ca672] <==
	I0612 20:20:40.077190       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0612 20:20:40.129082       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [fc7c02696ac6] <==
	I0612 20:20:48.829384       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0612 20:20:48.847196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0612 20:20:48.847406       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0612 20:21:06.270897       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0612 20:21:06.271613       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73052297-e35f-4671-8da7-80da50a16913", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-269100_88985075-142e-4c07-885b-2f2e494871bf became leader
	I0612 20:21:06.271926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-269100_88985075-142e-4c07-885b-2f2e494871bf!
	I0612 20:21:06.372391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-269100_88985075-142e-4c07-885b-2f2e494871bf!
	

-- /stdout --
** stderr ** 
	W0612 13:22:44.958953    6796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-269100 -n functional-269100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-269100 -n functional-269100: (11.5572152s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-269100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (32.78s)

TestFunctional/parallel/ConfigCmd (1.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-269100 config unset cpus" to be -""- but got *"W0612 13:25:51.102223    7468 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-269100 config get cpus: exit status 14 (194.383ms)

** stderr ** 
	W0612 13:25:51.323258    2648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-269100 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0612 13:25:51.323258    2648 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-269100 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0612 13:25:51.522695    5304 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-269100 config get cpus" to be -""- but got *"W0612 13:25:51.834343    9596 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-269100 config unset cpus" to be -""- but got *"W0612 13:25:52.076945    9740 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-269100 config get cpus: exit status 14 (192.8584ms)

** stderr ** 
	W0612 13:25:52.296296    1856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-269100 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0612 13:25:52.296296    1856 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.40s)
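
Note: every ConfigCmd assertion above fails the same way. functional_test.go:1206 compares the captured stderr verbatim against the expected string, and because the Docker context metadata file referenced in the warning is missing on the Jenkins host, the "Unable to resolve the current Docker CLI context" warning is prepended to stderr on every minikube invocation, so the literal comparison can never match. A minimal sketch of how a comparison could tolerate klog-style warning lines, assuming a hypothetical stripWarnings helper (not part of the minikube test suite):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// klogWarning matches klog-style warning prefixes such as
// "W0612 13:25:51.323258    2648 main.go:291] ...".
var klogWarning = regexp.MustCompile(`^W\d{4} \d{2}:\d{2}:\d{2}`)

// stripWarnings drops warning lines from captured stderr so the remaining
// output can be compared verbatim. Hypothetical helper, illustration only.
func stripWarnings(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		if klogWarning.MatchString(strings.TrimSpace(line)) {
			continue // e.g. the Docker CLI context warning seen above
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n")
}

func main() {
	got := "W0612 13:25:51.323258    2648 main.go:291] Unable to resolve the current Docker CLI context \"default\"\nError: specified key could not be found in config"
	fmt.Printf("%q\n", stripWarnings(got)) // "Error: specified key could not be found in config"
}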

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-269100 service --namespace=default --https --url hello-node: exit status 1 (15.023521s)

** stderr ** 
	W0612 13:27:42.389991     756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-269100 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-269100 service hello-node --url --format={{.IP}}: exit status 1 (15.0085226s)

** stderr ** 
	W0612 13:27:57.459989    7100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-269100 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/ServiceCmd/URL (15.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-269100 service hello-node --url: exit status 1 (15.0475313s)

** stderr ** 
	W0612 13:28:12.448047   10972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-269100 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.05s)

TestMultiControlPlane/serial/PingHostFromPods (69.03s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-q7zbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-q7zbt -- sh -c "ping -c 1 172.23.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-q7zbt -- sh -c "ping -c 1 172.23.192.1": exit status 1 (10.4499411s)

-- stdout --
	PING 172.23.192.1 (172.23.192.1): 56 data bytes
	
	--- 172.23.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0612 13:47:08.363172   10592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.23.192.1) from pod (busybox-fc5497c4f-q7zbt): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-qhrx6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-qhrx6 -- sh -c "ping -c 1 172.23.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-qhrx6 -- sh -c "ping -c 1 172.23.192.1": exit status 1 (10.4185548s)

-- stdout --
	PING 172.23.192.1 (172.23.192.1): 56 data bytes
	
	--- 172.23.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0612 13:47:19.232766    8176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.23.192.1) from pod (busybox-fc5497c4f-qhrx6): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-sfrgv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-sfrgv -- sh -c "ping -c 1 172.23.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-sfrgv -- sh -c "ping -c 1 172.23.192.1": exit status 1 (10.4406234s)

-- stdout --
	PING 172.23.192.1 (172.23.192.1): 56 data bytes
	
	--- 172.23.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0612 13:47:30.087901   13608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.23.192.1) from pod (busybox-fc5497c4f-sfrgv): exit status 1
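Note: all three busybox pods resolved host.minikube.internal (the preceding nslookup steps completed), yet every ICMP echo to the Default Switch gateway 172.23.192.1 was lost. On Hyper-V that pattern usually points at the Windows host firewall dropping inbound ICMPv4 on the vEthernet (Default Switch) adapter rather than at the pod network. A rough repro of the probe outside the harness, as a sketch (assumes kubectl already points at the ha-957600 cluster; the pod name is taken from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe ha_test.go:218 runs, minus the minikube kubectl wrapper.
        cmd := exec.Command("kubectl", "exec", "busybox-fc5497c4f-q7zbt",
            "--", "sh", "-c", "ping -c 1 172.23.192.1")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("probe failed:", err) // exit status 1 reproduces the failure
        }
    }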
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-957600 -n ha-957600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-957600 -n ha-957600: (12.4965524s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 logs -n 25: (9.1166963s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-269100 image build -t     | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:29 PDT | 12 Jun 24 13:29 PDT |
	|         | localhost/my-image:functional-269100 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-269100                    | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:29 PDT | 12 Jun 24 13:29 PDT |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-269100 image ls           | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:29 PDT | 12 Jun 24 13:29 PDT |
	| delete  | -p functional-269100                 | functional-269100 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:33 PDT | 12 Jun 24 13:34 PDT |
	| start   | -p ha-957600 --wait=true             | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:34 PDT | 12 Jun 24 13:46 PDT |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- apply -f             | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:46 PDT | 12 Jun 24 13:46 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- rollout status       | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:46 PDT | 12 Jun 24 13:46 PDT |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- get pods -o          | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- get pods -o          | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-q7zbt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-qhrx6 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-sfrgv --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-q7zbt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-qhrx6 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-sfrgv --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-q7zbt -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-qhrx6 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-sfrgv -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- get pods -o          | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-q7zbt              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT |                     |
	|         | busybox-fc5497c4f-q7zbt -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-qhrx6              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT |                     |
	|         | busybox-fc5497c4f-qhrx6 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT | 12 Jun 24 13:47 PDT |
	|         | busybox-fc5497c4f-sfrgv              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-957600 -- exec                 | ha-957600         | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:47 PDT |                     |
	|         | busybox-fc5497c4f-sfrgv -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.192.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 13:34:56
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 13:34:56.542237    7444 out.go:291] Setting OutFile to fd 1216 ...
	I0612 13:34:56.542237    7444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:34:56.542237    7444 out.go:304] Setting ErrFile to fd 1552...
	I0612 13:34:56.542237    7444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:34:56.569708    7444 out.go:298] Setting JSON to false
	I0612 13:34:56.572530    7444 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22849,"bootTime":1718201647,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 13:34:56.572530    7444 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 13:34:56.579683    7444 out.go:177] * [ha-957600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 13:34:56.584327    7444 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:34:56.584134    7444 notify.go:220] Checking for updates...
	I0612 13:34:56.586832    7444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 13:34:56.589473    7444 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 13:34:56.592013    7444 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 13:34:56.594373    7444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 13:34:56.597436    7444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 13:35:01.833778    7444 out.go:177] * Using the hyperv driver based on user configuration
	I0612 13:35:01.840588    7444 start.go:297] selected driver: hyperv
	I0612 13:35:01.840588    7444 start.go:901] validating driver "hyperv" against <nil>
	I0612 13:35:01.840588    7444 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 13:35:01.888640    7444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 13:35:01.890173    7444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 13:35:01.890173    7444 cni.go:84] Creating CNI manager for ""
	I0612 13:35:01.890173    7444 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0612 13:35:01.890173    7444 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0612 13:35:01.890724    7444 start.go:340] cluster config:
	{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:35:01.890786    7444 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 13:35:01.899687    7444 out.go:177] * Starting "ha-957600" primary control-plane node in "ha-957600" cluster
	I0612 13:35:01.903251    7444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:35:01.903251    7444 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0612 13:35:01.903879    7444 cache.go:56] Caching tarball of preloaded images
	I0612 13:35:01.904060    7444 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 13:35:01.904369    7444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 13:35:01.904485    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:35:01.905231    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json: {Name:mk8a5bf4016ab0a0e27781815d7a6f396d68f116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:35:01.906447    7444 start.go:360] acquireMachinesLock for ha-957600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 13:35:01.906447    7444 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-957600"
	I0612 13:35:01.906800    7444 start.go:93] Provisioning new machine with config: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:35:01.907050    7444 start.go:125] createHost starting for "" (driver="hyperv")
	I0612 13:35:01.913602    7444 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 13:35:01.913602    7444 start.go:159] libmachine.API.Create for "ha-957600" (driver="hyperv")
	I0612 13:35:01.913602    7444 client.go:168] LocalClient.Create starting
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:35:01.914405    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 13:35:03.915690    7444 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 13:35:03.915690    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:03.915690    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 13:35:05.648973    7444 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 13:35:05.648973    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:05.649870    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:35:07.126091    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:35:07.126441    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:07.126513    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:35:10.963077    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:35:10.963077    7444 main.go:141] libmachine: [stderr =====>] : 
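The switch query returns JSON so libmachine can pick a network: it accepts any External switch, or the built-in "Default Switch" matched by its fixed GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444 (SwitchType 1 is Internal in Hyper-V's Private=0/Internal=1/External=2 enum). Decoding that output is a one-struct job; a sketch with the shape assumed from the JSON above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 0=Private, 1=Internal, 2=External
    }

    func main() {
        raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
        var switches []vmSwitch
        if err := json.Unmarshal([]byte(raw), &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }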
	I0612 13:35:10.965478    7444 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 13:35:11.518935    7444 main.go:141] libmachine: Creating SSH key...
	I0612 13:35:11.923838    7444 main.go:141] libmachine: Creating VM...
	I0612 13:35:11.923838    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:35:14.792720    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:35:14.792720    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:14.793525    7444 main.go:141] libmachine: Using switch "Default Switch"
	I0612 13:35:14.793525    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:35:16.551983    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:35:16.566172    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:16.566172    7444 main.go:141] libmachine: Creating VHD
	I0612 13:35:16.566172    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 13:35:20.323494    7444 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CEE6B17B-45D3-4FF0-9DF7-237DC435A391
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 13:35:20.324319    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:20.324319    7444 main.go:141] libmachine: Writing magic tar header
	I0612 13:35:20.324440    7444 main.go:141] libmachine: Writing SSH key tar header
	I0612 13:35:20.334186    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 13:35:23.482850    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:23.482850    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:23.482850    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\disk.vhd' -SizeBytes 20000MB
	I0612 13:35:26.012868    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:26.013047    7444 main.go:141] libmachine: [stderr =====>] : 
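The fixed.vhd → disk.vhd sequence above is the boot2docker disk trick: create a tiny fixed-format VHD (raw data plus a footer), write a tar stream containing a marker file and the SSH key at the start of the raw data ("Writing magic tar header" / "Writing SSH key tar header"), then convert it to a dynamic VHD and resize it; on first boot the guest finds the tar and formats the disk around it. A sketch of laying such a tar at the head of a raw image with archive/tar (the marker name and contents here are assumptions patterned after docker-machine, not confirmed by this log):

    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        f, err := os.OpenFile("disk.raw", os.O_WRONLY|os.O_CREATE, 0o644)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        tw := tar.NewWriter(f)              // tar stream begins at offset 0 of the image
        key, _ := os.ReadFile("id_rsa.pub") // hypothetical key path

        magic := []byte("boot2docker, please format-me") // assumed marker contents
        tw.WriteHeader(&tar.Header{Name: "magic", Mode: 0o644, Size: int64(len(magic))})
        tw.Write(magic)

        tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(key))})
        tw.Write(key)

        if err := tw.Close(); err != nil {
            panic(err)
        }
    }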
	I0612 13:35:26.013103    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-957600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0612 13:35:29.684966    7444 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-957600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 13:35:29.684966    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:29.684966    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-957600 -DynamicMemoryEnabled $false
	I0612 13:35:31.928694    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:31.928812    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:31.928812    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-957600 -Count 2
	I0612 13:35:34.099615    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:34.099707    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:34.099707    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-957600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\boot2docker.iso'
	I0612 13:35:36.612780    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:36.612780    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:36.613122    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-957600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\disk.vhd'
	I0612 13:35:39.346262    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:39.347301    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:39.347301    7444 main.go:141] libmachine: Starting VM...
	I0612 13:35:39.347301    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-957600
	I0612 13:35:42.540047    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:42.540047    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:42.540047    7444 main.go:141] libmachine: Waiting for host to start...
	I0612 13:35:42.540047    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:35:44.802918    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:35:44.803633    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:44.803704    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:35:47.339590    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:47.339590    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:48.341681    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:35:50.539461    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:35:50.539556    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:50.539615    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:35:53.112618    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:53.112690    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:54.119513    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:35:56.317936    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:35:56.317936    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:56.318114    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:35:58.795148    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:58.795541    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:59.808387    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:02.035483    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:02.035548    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:02.035609    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:04.563510    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:36:04.564633    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:05.578819    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:07.822978    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:07.822978    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:07.823386    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:10.466760    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:10.467001    7444 main.go:141] libmachine: [stderr =====>] : 
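The "Waiting for host to start..." block above is libmachine's address poll: after Start-VM it alternates ( Get-VM ).state with (( Get-VM ).networkadapters[0]).ipaddresses[0], pausing about a second between rounds, until the Default Switch's DHCP gives the guest an address (172.23.203.104 after roughly 28 seconds here). A condensed sketch of that loop (the PowerShell invocation and timeout value are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const vm = "ha-957600"
        deadline := time.Now().Add(3 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("powershell", "-NoProfile", "-NonInteractive",
                "(( Hyper-V\\Get-VM "+vm+" ).networkadapters[0]).ipaddresses[0]").Output()
            if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
                fmt.Println("VM IP:", ip)
                return
            }
            time.Sleep(time.Second) // the adapter reports nothing until DHCP completes
        }
        fmt.Println("timed out waiting for an address")
    }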
	I0612 13:36:10.467280    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:12.606935    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:12.607534    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:12.607534    7444 machine.go:94] provisionDockerMachine start ...
	I0612 13:36:12.607534    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:14.774693    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:14.774693    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:14.775762    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:17.374045    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:17.374199    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:17.380004    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:17.391204    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:17.391204    7444 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 13:36:17.531450    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 13:36:17.531450    7444 buildroot.go:166] provisioning hostname "ha-957600"
	I0612 13:36:17.531450    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:19.652524    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:19.652524    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:19.652918    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:22.170758    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:22.170758    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:22.176149    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:22.176965    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:22.176965    7444 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957600 && echo "ha-957600" | sudo tee /etc/hostname
	I0612 13:36:22.346621    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957600
	
	I0612 13:36:22.346755    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:24.510894    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:24.511261    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:24.511401    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:27.039816    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:27.039816    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:27.046542    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:27.047339    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:27.047339    7444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 13:36:27.203007    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 13:36:27.203007    7444 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 13:36:27.203007    7444 buildroot.go:174] setting up certificates
	I0612 13:36:27.203007    7444 provision.go:84] configureAuth start
	I0612 13:36:27.203007    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:29.368374    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:29.368374    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:29.368793    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:31.899036    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:31.899240    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:31.899357    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:33.997322    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:33.997322    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:33.998178    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:36.497718    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:36.497718    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:36.497946    7444 provision.go:143] copyHostCerts
	I0612 13:36:36.498071    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 13:36:36.498530    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 13:36:36.498632    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 13:36:36.499151    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 13:36:36.500446    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 13:36:36.500621    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 13:36:36.500621    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 13:36:36.500621    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 13:36:36.502242    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 13:36:36.502457    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 13:36:36.502457    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 13:36:36.502457    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 13:36:36.503742    7444 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-957600 san=[127.0.0.1 172.23.203.104 ha-957600 localhost minikube]
	I0612 13:36:36.625953    7444 provision.go:177] copyRemoteCerts
	I0612 13:36:36.635736    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 13:36:36.636735    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:38.750317    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:38.750689    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:38.750689    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:41.242544    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:41.242658    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:41.242658    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:36:41.355617    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7188672s)
	I0612 13:36:41.355617    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 13:36:41.355617    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 13:36:41.399652    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 13:36:41.400097    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 13:36:41.447183    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 13:36:41.447883    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0612 13:36:41.497281    7444 provision.go:87] duration metric: took 14.2941595s to configureAuth
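configureAuth above mints a server certificate signed by minikubeCA with the SANs from the log line (127.0.0.1, 172.23.203.104, ha-957600, localhost, minikube) and copies it into /etc/docker, where the dockerd unit written below points --tlscert/--tlskey for TLS on 2376. A stripped-down sketch of issuing a SAN certificate from a CA with crypto/x509 (the CA is generated inline here as a stand-in; the real flow loads ca.pem/ca-key.pem from the minikube store, and validity and output handling are placeholders):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA key and self-signed CA certificate.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the IP and DNS SANs from the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-957600"}},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.23.203.104")},
            DNSNames:     []string{"ha-957600", "localhost", "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }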
	I0612 13:36:41.497281    7444 buildroot.go:189] setting minikube options for container-runtime
	I0612 13:36:41.497862    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:36:41.497902    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:43.582317    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:43.582563    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:43.582652    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:46.106834    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:46.107642    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:46.112950    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:46.113545    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:46.113545    7444 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 13:36:46.256968    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 13:36:46.257071    7444 buildroot.go:70] root file system type: tmpfs
	I0612 13:36:46.257243    7444 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 13:36:46.257243    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:48.369564    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:48.369564    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:48.369564    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:50.882639    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:50.882705    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:50.887251    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:50.888223    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:50.888223    7444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 13:36:51.047753    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 13:36:51.047845    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:53.133977    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:53.134324    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:53.134324    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:55.681306    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:55.681306    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:55.687497    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:55.687497    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:55.688042    7444 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 13:36:57.808076    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
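
	The "diff -u ... || { mv ...; systemctl daemon-reload && systemctl enable docker && systemctl restart docker; }" command above is an idempotent-update idiom: the freshly rendered unit only replaces the installed one, and Docker is only restarted, when the two files actually differ. On this first boot /lib/systemd/system/docker.service did not exist yet, so diff failed and the update branch ran, hence the "Created symlink" message. A minimal Go sketch of the same check-then-swap logic follows; the paths come from the log, but the helper is illustrative and elides the sudo/SSH transport used in the run above.

	// Sketch: swap in a rendered systemd unit and restart the service only
	// when it differs from the installed copy. Paths match the log; the
	// systemctl sequence mirrors the logged shell one-liner.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(installed, rendered string) error {
		old, readErr := os.ReadFile(installed) // a missing unit counts as "different"
		next, err := os.ReadFile(rendered)
		if err != nil {
			return err
		}
		if readErr == nil && bytes.Equal(old, next) {
			return nil // unchanged: skip the needless docker restart
		}
		if err := os.Rename(rendered, installed); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		err := updateUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
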
	I0612 13:36:57.808137    7444 machine.go:97] duration metric: took 45.2004637s to provisionDockerMachine
	I0612 13:36:57.808137    7444 client.go:171] duration metric: took 1m55.8941812s to LocalClient.Create
	I0612 13:36:57.808193    7444 start.go:167] duration metric: took 1m55.894238s to libmachine.API.Create "ha-957600"
	I0612 13:36:57.808321    7444 start.go:293] postStartSetup for "ha-957600" (driver="hyperv")
	I0612 13:36:57.808321    7444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 13:36:57.819780    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 13:36:57.820889    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:59.935272    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:59.935854    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:59.935920    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:02.430030    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:02.431098    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:02.431098    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:37:02.544154    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7243588s)
	I0612 13:37:02.556323    7444 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 13:37:02.563987    7444 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 13:37:02.564142    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 13:37:02.564634    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 13:37:02.565586    7444 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 13:37:02.565586    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 13:37:02.577058    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 13:37:02.595360    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 13:37:02.641985    7444 start.go:296] duration metric: took 4.8336491s for postStartSetup
	I0612 13:37:02.645019    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:04.784175    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:04.784546    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:04.784897    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:07.259969    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:07.259969    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:07.259969    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:37:07.263496    7444 start.go:128] duration metric: took 2m5.3560633s to createHost
	I0612 13:37:07.263589    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:09.413529    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:09.413639    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:09.413639    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:11.937910    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:11.937910    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:11.944884    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:37:11.944884    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:37:11.944884    7444 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 13:37:12.088661    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224632.091842522
	
	I0612 13:37:12.088661    7444 fix.go:216] guest clock: 1718224632.091842522
	I0612 13:37:12.088661    7444 fix.go:229] Guest: 2024-06-12 13:37:12.091842522 -0700 PDT Remote: 2024-06-12 13:37:07.2635896 -0700 PDT m=+130.806402601 (delta=4.828252922s)
	I0612 13:37:12.088661    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:14.220293    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:14.220293    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:14.220293    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:16.729661    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:16.729661    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:16.735559    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:37:16.735559    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:37:16.735559    7444 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718224632
	I0612 13:37:16.877631    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 20:37:12 UTC 2024
	
	I0612 13:37:16.877631    7444 fix.go:236] clock set: Wed Jun 12 20:37:12 UTC 2024
	 (err=<nil>)
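
	The three steps above implement the guest-clock fix: read the guest clock with "date +%s.%N", compare it to the host clock (a 4.828252922s drift here), and reset the guest with "sudo date -s @<seconds>". A small Go sketch of the delta computation, fed the exact values from this run; the 2-second tolerance is an assumption for illustration, not a threshold taken from the log.

	// Sketch: parse the guest's `date +%s.%N` output and compute the
	// guest-host delta that fix.go logs above.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		// float64 rounds the nanoseconds slightly; close enough for drift checks.
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Guest and remote-host timestamps from the log lines above.
		host := time.Unix(1718224627, 263589600) // 2024-06-12 13:37:07.2635896 -0700
		delta, err := guestClockDelta("1718224632.091842522", host)
		if err != nil {
			panic(err)
		}
		fmt.Printf("delta=%v\n", delta) // ~4.83s
		if delta > 2*time.Second || delta < -2*time.Second {
			fmt.Println("would run: sudo date -s @1718224632")
		}
	}
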
	I0612 13:37:16.877631    7444 start.go:83] releasing machines lock for "ha-957600", held for 2m14.9704791s
	I0612 13:37:16.877631    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:19.036041    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:19.036575    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:19.036575    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:21.534178    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:21.534178    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:21.539333    7444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 13:37:21.539431    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:21.551466    7444 ssh_runner.go:195] Run: cat /version.json
	I0612 13:37:21.551632    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:23.743902    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:23.744088    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:23.744176    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:23.752775    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:23.752775    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:23.752775    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:26.360477    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:26.360477    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:26.360748    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:37:26.396519    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:26.396519    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:26.397271    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:37:26.515102    7444 ssh_runner.go:235] Completed: cat /version.json: (4.9624684s)
	I0612 13:37:26.515102    7444 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9757171s)
	I0612 13:37:26.530566    7444 ssh_runner.go:195] Run: systemctl --version
	I0612 13:37:26.552035    7444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 13:37:26.560153    7444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 13:37:26.572971    7444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 13:37:26.604819    7444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 13:37:26.604917    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:37:26.604917    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:37:26.658600    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 13:37:26.696089    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 13:37:26.718226    7444 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 13:37:26.733936    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 13:37:26.771089    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:37:26.802851    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 13:37:26.834244    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:37:26.870154    7444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 13:37:26.906061    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 13:37:26.940386    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 13:37:26.973956    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 13:37:27.009706    7444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 13:37:27.047191    7444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 13:37:27.081976    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:27.293221    7444 ssh_runner.go:195] Run: sudo systemctl restart containerd
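
	The run above rewrites /etc/containerd/config.toml with a series of anchored sed substitutions (sandbox image, SystemdCgroup, conf_dir, and so on) and then restarts containerd. A Go sketch of the same rewrite style, applying three of the logged regexes to a made-up config fragment; the fragment and the use of Go instead of sed are purely illustrative.

	// Sketch: the sed expressions above as Go regex rewrites over a sample
	// config.toml fragment (the fragment is invented for the example).
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		config := "[plugins.\"io.containerd.grpc.v1.cri\"]\n" +
			"  sandbox_image = \"registry.k8s.io/pause:3.8\"\n" +
			"  SystemdCgroup = true\n" +
			"  conf_dir = \"/etc/cni/custom\"\n"
		rules := []struct{ re, repl string }{
			{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
			{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
			{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
		}
		for _, r := range rules {
			config = regexp.MustCompile(r.re).ReplaceAllString(config, r.repl)
		}
		fmt.Print(config)
	}
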
	I0612 13:37:27.325466    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:37:27.339386    7444 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 13:37:27.379251    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:37:27.415507    7444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 13:37:27.461514    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:37:27.500264    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:37:27.538787    7444 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 13:37:27.610224    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:37:27.636612    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:37:27.688637    7444 ssh_runner.go:195] Run: which cri-dockerd
	I0612 13:37:27.707187    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 13:37:27.727346    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 13:37:27.771607    7444 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 13:37:27.991901    7444 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 13:37:28.192210    7444 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 13:37:28.192516    7444 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
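
	The payload pushed to /etc/docker/daemon.json above is not shown in the log, only its size (130 bytes) and its purpose: switching Docker to the cgroupfs driver so it matches the kubelet's cgroupDriver setting further down. Docker selects its cgroup driver via the exec-opts key, so a plausible shape for the file can be rendered as below; the exact key set minikube writes is an assumption here, not something the log confirms.

	// Sketch: render a daemon.json that pins Docker to the cgroupfs driver.
	// The key set is a guess at the file's shape, not a dump of the real one.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		daemon := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, err := json.MarshalIndent(daemon, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b)) // a rendering like this is what gets scp'd into the guest
	}
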
	I0612 13:37:28.236500    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:28.443355    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:37:30.980177    7444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5368145s)
	I0612 13:37:30.992534    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 13:37:31.029135    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:37:31.062643    7444 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 13:37:31.261297    7444 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 13:37:31.477180    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:31.672377    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 13:37:31.714713    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:37:31.751314    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:31.933908    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 13:37:32.042389    7444 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 13:37:32.059089    7444 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 13:37:32.068335    7444 start.go:562] Will wait 60s for crictl version
	I0612 13:37:32.080328    7444 ssh_runner.go:195] Run: which crictl
	I0612 13:37:32.106785    7444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 13:37:32.163497    7444 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
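
	"Will wait 60s for socket path /var/run/cri-dockerd.sock" and "Will wait 60s for crictl version" above are deadline-bounded readiness checks rather than one-shot stats: the runner retries until the socket appears or the budget is spent. A Go sketch of that polling pattern; the 500ms interval and the error wording are assumptions.

	// Sketch: poll for a unix socket path with a deadline, the pattern behind
	// the "Will wait 60s for socket path" log line.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is there; crictl can be probed next
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket ready")
	}
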
	I0612 13:37:32.173814    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:37:32.217995    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:37:32.253189    7444 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 13:37:32.253360    7444 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 13:37:32.258119    7444 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 13:37:32.258119    7444 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 13:37:32.258119    7444 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 13:37:32.258119    7444 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 13:37:32.262304    7444 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 13:37:32.262350    7444 ip.go:210] interface addr: 172.23.192.1/20
	I0612 13:37:32.275181    7444 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 13:37:32.282339    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
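
	The /etc/hosts update above is a grep-then-append upsert: drop any existing line ending in "<tab>host.minikube.internal", append the current mapping, write the result to /tmp/h.$$, and copy it back over /etc/hosts. A string-based Go sketch of the same upsert; working on an in-memory string stands in for the temp-file-and-copy dance done over SSH.

	// Sketch: remove any stale "<ip>\t<name>" line and append the fresh one,
	// mirroring the logged shell pipeline.
	package main

	import (
		"fmt"
		"strings"
	)

	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n172.23.192.9\thost.minikube.internal\n"
		fmt.Print(upsertHost(hosts, "172.23.192.1", "host.minikube.internal"))
	}
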
	I0612 13:37:32.319049    7444 kubeadm.go:877] updating cluster {Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 13:37:32.319049    7444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:37:32.331943    7444 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 13:37:32.353638    7444 docker.go:685] Got preloaded images: 
	I0612 13:37:32.353638    7444 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0612 13:37:32.367429    7444 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0612 13:37:32.397888    7444 ssh_runner.go:195] Run: which lz4
	I0612 13:37:32.404497    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0612 13:37:32.417918    7444 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 13:37:32.423770    7444 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 13:37:32.424763    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0612 13:37:34.287633    7444 docker.go:649] duration metric: took 1.882155s to copy over tarball
	I0612 13:37:34.299774    7444 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 13:37:42.818860    7444 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188951s)
	I0612 13:37:42.818860    7444 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 13:37:42.888444    7444 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0612 13:37:42.909759    7444 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0612 13:37:42.959732    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:43.189785    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:37:46.150583    7444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9607884s)
	I0612 13:37:46.160124    7444 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 13:37:46.191664    7444 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0612 13:37:46.191740    7444 cache_images.go:84] Images are preloaded, skipping loading
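
	"Images are preloaded, skipping loading" is decided by diffing the guest's "docker images --format {{.Repository}}:{{.Tag}}" output against the images required for v1.30.1: before the tarball was unpacked the list was empty and kube-apiserver was reported missing; afterwards all eight images are present. A Go sketch of that set-difference check; the helper is illustrative and the required list is truncated to two of the logged images.

	// Sketch: report which required images are absent from `docker images`
	// output, the check behind the preload decision above.
	package main

	import (
		"fmt"
		"strings"
	)

	func missingImages(required []string, dockerImages string) []string {
		have := map[string]bool{}
		for _, line := range strings.Split(dockerImages, "\n") {
			have[strings.TrimSpace(line)] = true
		}
		var missing []string
		for _, img := range required {
			if !have[img] {
				missing = append(missing, img)
			}
		}
		return missing
	}

	func main() {
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/etcd:3.5.12-0",
		}
		fmt.Println(missingImages(required, ""))                           // before extraction: both missing
		fmt.Println(missingImages(required, strings.Join(required, "\n"))) // after: none
	}
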
	I0612 13:37:46.191740    7444 kubeadm.go:928] updating node { 172.23.203.104 8443 v1.30.1 docker true true} ...
	I0612 13:37:46.192064    7444 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.203.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 13:37:46.202510    7444 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0612 13:37:46.239519    7444 cni.go:84] Creating CNI manager for ""
	I0612 13:37:46.239613    7444 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 13:37:46.239613    7444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 13:37:46.239725    7444 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.203.104 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-957600 NodeName:ha-957600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.203.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.203.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 13:37:46.239992    7444 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.203.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-957600"
	  kubeletExtraArgs:
	    node-ip: 172.23.203.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.203.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 13:37:46.240081    7444 kube-vip.go:115] generating kube-vip config ...
	I0612 13:37:46.252078    7444 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 13:37:46.278209    7444 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 13:37:46.278209    7444 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0612 13:37:46.289375    7444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 13:37:46.314560    7444 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 13:37:46.330080    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0612 13:37:46.352034    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0612 13:37:46.383560    7444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 13:37:46.416500    7444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0612 13:37:46.448054    7444 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0612 13:37:46.490410    7444 ssh_runner.go:195] Run: grep 172.23.207.254	control-plane.minikube.internal$ /etc/hosts
	I0612 13:37:46.495364    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 13:37:46.528393    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:46.729154    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:37:46.759505    7444 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600 for IP: 172.23.203.104
	I0612 13:37:46.759550    7444 certs.go:194] generating shared ca certs ...
	I0612 13:37:46.759633    7444 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:46.760428    7444 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 13:37:46.760913    7444 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 13:37:46.761207    7444 certs.go:256] generating profile certs ...
	I0612 13:37:46.761932    7444 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key
	I0612 13:37:46.761932    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.crt with IP's: []
	I0612 13:37:47.362697    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.crt ...
	I0612 13:37:47.362697    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.crt: {Name:mkd4d63a91baf2e65e053f36cc6b43511c7c6e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.364600    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key ...
	I0612 13:37:47.364600    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key: {Name:mkeefa0efc3694c7552816886ab96188c0feac77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.368020    7444 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.2cdbb0d2
	I0612 13:37:47.368020    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.2cdbb0d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.203.104 172.23.207.254]
	I0612 13:37:47.614086    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.2cdbb0d2 ...
	I0612 13:37:47.614086    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.2cdbb0d2: {Name:mk0a37a7a02e561559da747eb9992ef106e73eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.615338    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.2cdbb0d2 ...
	I0612 13:37:47.616386    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.2cdbb0d2: {Name:mk6cb6877f38d518fea7ca584fab3b00ed6037ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.616657    7444 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.2cdbb0d2 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt
	I0612 13:37:47.628757    7444 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.2cdbb0d2 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key
	I0612 13:37:47.629772    7444 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key
	I0612 13:37:47.630955    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt with IP's: []
	I0612 13:37:47.738742    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt ...
	I0612 13:37:47.738742    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt: {Name:mk8571c0058e2ae080ac64e930a9dddcf6a91373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.739749    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key ...
	I0612 13:37:47.739749    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key: {Name:mk57b524a6182d5adbbee38d20828d8cb4c5c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
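
	The profile certs generated above are ordinary x509 key pairs; the notable part is the apiserver cert's IP SAN list, which covers the in-cluster service VIP (10.96.0.1), loopback, the node IP (172.23.203.104), and the kube-vip HA VIP (172.23.207.254) so the certificate verifies on every address clients may use. A self-contained Go sketch generating a certificate with exactly those SANs; it self-signs for brevity, whereas the real cert is signed by minikubeCA, and the 26280h lifetime is taken from the CertExpiration field in the config dump above.

	// Sketch: generate an RSA key and a server certificate whose IP SANs
	// match the list logged for apiserver.crt (self-signed for brevity).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("172.23.203.104"), net.ParseIP("172.23.207.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
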
	I0612 13:37:47.740740    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 13:37:47.741416    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 13:37:47.741617    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 13:37:47.741825    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 13:37:47.741958    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 13:37:47.742105    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 13:37:47.742255    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 13:37:47.751846    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 13:37:47.753841    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 13:37:47.753841    7444 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 13:37:47.753841    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 13:37:47.754853    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 13:37:47.754853    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 13:37:47.754853    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 13:37:47.755838    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 13:37:47.755838    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 13:37:47.755838    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:37:47.755838    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 13:37:47.756859    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 13:37:47.802817    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 13:37:47.842833    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 13:37:47.893372    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 13:37:47.942414    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 13:37:47.991186    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 13:37:48.041399    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 13:37:48.091411    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 13:37:48.142182    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 13:37:48.189745    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 13:37:48.249101    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 13:37:48.294060    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 13:37:48.334672    7444 ssh_runner.go:195] Run: openssl version
	I0612 13:37:48.353387    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 13:37:48.385542    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 13:37:48.394014    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 13:37:48.406311    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 13:37:48.430041    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 13:37:48.472481    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 13:37:48.505747    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 13:37:48.512366    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 13:37:48.522348    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 13:37:48.542948    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 13:37:48.578690    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 13:37:48.608485    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:37:48.616009    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:37:48.628250    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:37:48.652127    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
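
	Each "ln -fs <cert> /etc/ssl/certs/<hash>.0" above follows OpenSSL's hashed-directory convention: the link is named after the subject hash printed by "openssl x509 -hash -noout" (b5213941 for minikubeCA.pem in this run), which is how the TLS stack locates a CA at verification time. A Go sketch of the same step, shelling out to openssl for the hash; error handling is simplified relative to the logged commands.

	// Sketch: compute the OpenSSL subject hash of a certificate and create
	// the /etc/ssl/certs/<hash>.0 symlink that certificate lookup expects.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkByHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // replace any stale link, like `ln -fs`
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("linked", link) // e.g. /etc/ssl/certs/b5213941.0 in this run
	}
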
	I0612 13:37:48.686731    7444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:37:48.693974    7444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 13:37:48.694491    7444 kubeadm.go:391] StartCluster: {Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:37:48.702819    7444 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 13:37:48.743519    7444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 13:37:48.774371    7444 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 13:37:48.805671    7444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 13:37:48.823488    7444 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 13:37:48.823488    7444 kubeadm.go:156] found existing configuration files:
	
	I0612 13:37:48.837851    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 13:37:48.854010    7444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 13:37:48.865309    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 13:37:48.894657    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 13:37:48.911259    7444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 13:37:48.925520    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 13:37:48.956374    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 13:37:48.973918    7444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 13:37:48.985443    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 13:37:49.014074    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 13:37:49.031290    7444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 13:37:49.042373    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
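
	The four grep-then-rm pairs above are stale-config cleanup: each kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so the kubeadm init that follows writes a fresh copy (on this first start none of the files existed, so every grep failed and every rm was a no-op). A compact Go sketch of that sweep; the file handling is simplified relative to the logged per-file commands.

	// Sketch: delete any kubeconfig that does not reference the expected
	// control-plane endpoint, leaving kubeadm to regenerate it.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func sweepStaleConfs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err == nil && strings.Contains(string(data), endpoint) {
				fmt.Println("kept:", p)
				continue
			}
			os.Remove(p) // absent or pointing elsewhere: remove it
			fmt.Println("removed (or already absent):", p)
		}
	}

	func main() {
		sweepStaleConfs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
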
	I0612 13:37:49.062944    7444 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 13:37:49.500936    7444 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 13:38:05.056892    7444 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 13:38:05.057019    7444 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 13:38:05.057386    7444 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 13:38:05.057688    7444 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 13:38:05.057902    7444 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0612 13:38:05.057902    7444 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 13:38:05.060687    7444 out.go:204]   - Generating certificates and keys ...
	I0612 13:38:05.061111    7444 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 13:38:05.061793    7444 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 13:38:05.062069    7444 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-957600 localhost] and IPs [172.23.203.104 127.0.0.1 ::1]
	I0612 13:38:05.062155    7444 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 13:38:05.062244    7444 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-957600 localhost] and IPs [172.23.203.104 127.0.0.1 ::1]
	I0612 13:38:05.062244    7444 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 13:38:05.062786    7444 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 13:38:05.062976    7444 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 13:38:05.063057    7444 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 13:38:05.063221    7444 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 13:38:05.063381    7444 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 13:38:05.063479    7444 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 13:38:05.063703    7444 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 13:38:05.063864    7444 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 13:38:05.063895    7444 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 13:38:05.063895    7444 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 13:38:05.066920    7444 out.go:204]   - Booting up control plane ...
	I0612 13:38:05.066920    7444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 13:38:05.067444    7444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 13:38:05.067444    7444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 13:38:05.067736    7444 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 13:38:05.068265    7444 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 13:38:05.068317    7444 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 13:38:05.068674    7444 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 13:38:05.068674    7444 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 13:38:05.068674    7444 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002410575s
	I0612 13:38:05.069235    7444 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 13:38:05.069235    7444 kubeadm.go:309] [api-check] The API server is healthy after 8.92993329s
	I0612 13:38:05.069235    7444 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 13:38:05.069235    7444 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 13:38:05.069826    7444 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 13:38:05.071806    7444 kubeadm.go:309] [mark-control-plane] Marking the node ha-957600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 13:38:05.071806    7444 kubeadm.go:309] [bootstrap-token] Using token: 6td0sr.fr4ba9t8fayocxit
	I0612 13:38:05.075385    7444 out.go:204]   - Configuring RBAC rules ...
	I0612 13:38:05.075548    7444 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 13:38:05.075548    7444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 13:38:05.075548    7444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 13:38:05.075548    7444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0612 13:38:05.076380    7444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 13:38:05.076380    7444 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 13:38:05.076380    7444 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 13:38:05.076380    7444 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 13:38:05.076380    7444 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 13:38:05.076380    7444 kubeadm.go:309] 
	I0612 13:38:05.077384    7444 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 13:38:05.077384    7444 kubeadm.go:309] 
	I0612 13:38:05.077384    7444 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 13:38:05.077384    7444 kubeadm.go:309] 
	I0612 13:38:05.077384    7444 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 13:38:05.077384    7444 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 13:38:05.077384    7444 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 13:38:05.077384    7444 kubeadm.go:309] 
	I0612 13:38:05.077384    7444 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 13:38:05.077384    7444 kubeadm.go:309] 
	I0612 13:38:05.078378    7444 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 13:38:05.078378    7444 kubeadm.go:309] 
	I0612 13:38:05.078378    7444 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 13:38:05.078378    7444 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 13:38:05.078378    7444 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 13:38:05.078378    7444 kubeadm.go:309] 
	I0612 13:38:05.078378    7444 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 13:38:05.078378    7444 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 13:38:05.078378    7444 kubeadm.go:309] 
	I0612 13:38:05.079382    7444 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6td0sr.fr4ba9t8fayocxit \
	I0612 13:38:05.079382    7444 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 \
	I0612 13:38:05.079382    7444 kubeadm.go:309] 	--control-plane 
	I0612 13:38:05.079382    7444 kubeadm.go:309] 
	I0612 13:38:05.079382    7444 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 13:38:05.079382    7444 kubeadm.go:309] 
	I0612 13:38:05.079382    7444 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6td0sr.fr4ba9t8fayocxit \
	I0612 13:38:05.080381    7444 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 
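
A note on the join commands above: kubeadm bootstrap tokens expire (the default TTL is 24 hours), so a node joined later than that needs a fresh join command, which can be printed on the control plane with "kubeadm token create --print-join-command". The --discovery-token-ca-cert-hash value stays stable across tokens because it is derived from the cluster CA public key.
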
	I0612 13:38:05.080381    7444 cni.go:84] Creating CNI manager for ""
	I0612 13:38:05.080381    7444 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 13:38:05.082083    7444 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0612 13:38:05.098184    7444 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0612 13:38:05.107200    7444 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0612 13:38:05.107200    7444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0612 13:38:05.154645    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0612 13:38:05.734387    7444 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 13:38:05.749475    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:05.753032    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957600 minikube.k8s.io/updated_at=2024_06_12T13_38_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=ha-957600 minikube.k8s.io/primary=true
	I0612 13:38:05.770548    7444 ops.go:34] apiserver oom_adj: -16
	I0612 13:38:05.995827    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:06.498890    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:07.001767    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:07.502706    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:08.007651    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:08.512858    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:09.000782    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:09.501782    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:10.002749    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:10.504591    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:11.008799    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:11.496997    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:12.009568    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:12.499765    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:13.002627    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:13.505702    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:14.007806    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:14.508788    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:14.998380    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:15.499001    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:16.001721    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:16.505607    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:16.646029    7444 kubeadm.go:1107] duration metric: took 10.9115159s to wait for elevateKubeSystemPrivileges
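
The burst of identical "kubectl get sa default" runs above (one roughly every 500ms from 13:38:05 to 13:38:16) is minikube polling until the controller manager has created the default ServiceAccount, since the cluster-admin binding for kube-system cannot take effect before then. A minimal Go sketch of the same polling pattern, assuming kubectl is on PATH and using the kubeconfig path from the log (an illustration, not minikube's source):

// poll_sa.go — retry "kubectl get sa default" until the default
// ServiceAccount exists or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Kubeconfig path taken from the log; adjust for other environments.
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
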
	W0612 13:38:16.646188    7444 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 13:38:16.646188    7444 kubeadm.go:393] duration metric: took 27.9516118s to StartCluster
	I0612 13:38:16.646244    7444 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:38:16.646464    7444 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:38:16.648111    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:38:16.649635    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 13:38:16.649820    7444 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:38:16.649865    7444 start.go:240] waiting for startup goroutines ...
	I0612 13:38:16.649972    7444 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 13:38:16.650135    7444 addons.go:69] Setting storage-provisioner=true in profile "ha-957600"
	I0612 13:38:16.650212    7444 addons.go:234] Setting addon storage-provisioner=true in "ha-957600"
	I0612 13:38:16.650212    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:38:16.650335    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:38:16.650212    7444 addons.go:69] Setting default-storageclass=true in profile "ha-957600"
	I0612 13:38:16.650464    7444 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-957600"
	I0612 13:38:16.651492    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:16.651492    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:16.852840    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 13:38:17.202169    7444 start.go:946] {"host.minikube.internal": 172.23.192.1} host record injected into CoreDNS's ConfigMap
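
The sed pipeline above splices two directives into the CoreDNS Corefile before "kubectl replace" pushes the ConfigMap back: a hosts block ahead of the forward directive, so host.minikube.internal resolves to the Hyper-V gateway, and a log directive ahead of errors. The affected fragment of the resulting Corefile looks roughly like this (unrelated stock directives elided):

        log
        errors
        ...
        hosts {
           172.23.192.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
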
	I0612 13:38:18.926279    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:18.926279    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:18.926279    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:18.926279    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:18.930896    7444 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 13:38:18.928577    7444 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:38:18.934510    7444 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 13:38:18.934510    7444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 13:38:18.934510    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:18.935318    7444 kapi.go:59] client config for ha-957600: &rest.Config{Host:"https://172.23.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 13:38:18.936044    7444 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 13:38:18.936946    7444 addons.go:234] Setting addon default-storageclass=true in "ha-957600"
	I0612 13:38:18.936946    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:38:18.938136    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:21.265178    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:21.265178    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:21.265178    7444 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 13:38:21.265178    7444 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 13:38:21.265178    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:21.421446    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:21.421446    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:21.421446    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:38:23.530843    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:23.530843    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:23.531044    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:38:24.101883    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:38:24.101883    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:24.102214    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:38:24.262017    7444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 13:38:26.190338    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:38:26.190338    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:26.191152    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:38:26.327756    7444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 13:38:26.498949    7444 round_trippers.go:463] GET https://172.23.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0612 13:38:26.499013    7444 round_trippers.go:469] Request Headers:
	I0612 13:38:26.499013    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:38:26.499013    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:38:26.511158    7444 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0612 13:38:26.513156    7444 round_trippers.go:463] PUT https://172.23.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0612 13:38:26.513249    7444 round_trippers.go:469] Request Headers:
	I0612 13:38:26.513249    7444 round_trippers.go:473]     Content-Type: application/json
	I0612 13:38:26.513249    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:38:26.513249    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:38:26.520324    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:38:26.524055    7444 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0612 13:38:26.526381    7444 addons.go:510] duration metric: took 9.8764058s for enable addons: enabled=[storage-provisioner default-storageclass]
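
The GET and PUT round trips against /apis/storage.k8s.io/v1/storageclasses just above are the default-storageclass addon re-asserting that the "standard" StorageClass carries the default-class annotation. Done by hand, the equivalent (illustrative, not the addon's actual code path) would be:

kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
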
	I0612 13:38:26.526448    7444 start.go:245] waiting for cluster config update ...
	I0612 13:38:26.526448    7444 start.go:254] writing updated cluster config ...
	I0612 13:38:26.531505    7444 out.go:177] 
	I0612 13:38:26.540073    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:38:26.540073    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:38:26.545767    7444 out.go:177] * Starting "ha-957600-m02" control-plane node in "ha-957600" cluster
	I0612 13:38:26.548438    7444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:38:26.548438    7444 cache.go:56] Caching tarball of preloaded images
	I0612 13:38:26.549107    7444 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 13:38:26.549435    7444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 13:38:26.549435    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:38:26.553331    7444 start.go:360] acquireMachinesLock for ha-957600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 13:38:26.554255    7444 start.go:364] duration metric: took 923.8µs to acquireMachinesLock for "ha-957600-m02"
	I0612 13:38:26.554313    7444 start.go:93] Provisioning new machine with config: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:38:26.554313    7444 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0612 13:38:26.556402    7444 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 13:38:26.556402    7444 start.go:159] libmachine.API.Create for "ha-957600" (driver="hyperv")
	I0612 13:38:26.557318    7444 client.go:168] LocalClient.Create starting
	I0612 13:38:26.557429    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 13:38:26.557941    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:38:26.557941    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:38:26.557941    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 13:38:26.558658    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:38:26.558658    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:38:26.558658    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 13:38:28.534318    7444 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 13:38:28.534318    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:28.534318    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 13:38:30.293730    7444 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 13:38:30.293730    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:30.294018    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:38:31.769159    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:38:31.769159    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:31.769442    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:38:35.444465    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:38:35.444465    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:35.447599    7444 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 13:38:35.935904    7444 main.go:141] libmachine: Creating SSH key...
	I0612 13:38:36.302100    7444 main.go:141] libmachine: Creating VM...
	I0612 13:38:36.302678    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:38:39.249913    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:38:39.251043    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:39.251178    7444 main.go:141] libmachine: Using switch "Default Switch"
	I0612 13:38:39.251178    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:38:41.006758    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:38:41.006758    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:41.006758    7444 main.go:141] libmachine: Creating VHD
	I0612 13:38:41.007373    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 13:38:44.914666    7444 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 214CD393-3C4C-4E43-B696-A1BFA3CB3E3D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 13:38:44.914666    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:44.914666    7444 main.go:141] libmachine: Writing magic tar header
	I0612 13:38:44.914926    7444 main.go:141] libmachine: Writing SSH key tar header
	I0612 13:38:44.926522    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 13:38:48.115717    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:38:48.115717    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:48.115717    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\disk.vhd' -SizeBytes 20000MB
	I0612 13:38:50.668958    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:38:50.669053    7444 main.go:141] libmachine: [stderr =====>] : 
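
The VHD sequence above ("Writing magic tar header" / "Writing SSH key tar header", then Convert-VHD and Resize-VHD) is the boot2docker provisioning trick: a tiny 10MB fixed VHD is created, a tar archive containing the generated SSH key is written directly into its data region, and the disk is then converted to a dynamic VHD and grown to 20000MB; on first boot the guest detects the tar signature and extracts the key before formatting the rest of the disk. A simplified Go sketch of writing such a tar stream at the start of a raw image (the file name and key are placeholders; the real code also pads the image and handles the VHD footer):

// magic_tar.go — write a tar archive containing an SSH public key at
// the very start of a disk image so the guest can extract it on boot.
package main

import (
	"archive/tar"
	"os"
)

func main() {
	f, err := os.Create("disk.raw") // stand-in for the fixed VHD's data region
	if err != nil {
		panic(err)
	}
	defer f.Close()

	key := []byte("ssh-rsa AAAA... example-key") // placeholder public key
	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{
		Name: ".ssh/authorized_keys",
		Mode: 0600,
		Size: int64(len(key)),
	}); err != nil {
		panic(err)
	}
	if _, err := tw.Write(key); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
}
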
	I0612 13:38:50.669149    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-957600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0612 13:38:54.339010    7444 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-957600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 13:38:54.339328    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:54.339328    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-957600-m02 -DynamicMemoryEnabled $false
	I0612 13:38:56.612252    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:38:56.612252    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:56.613264    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-957600-m02 -Count 2
	I0612 13:38:58.785257    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:38:58.785759    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:58.785759    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-957600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\boot2docker.iso'
	I0612 13:39:01.349335    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:01.349582    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:01.349692    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-957600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\disk.vhd'
	I0612 13:39:03.977118    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:03.978132    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:03.978132    7444 main.go:141] libmachine: Starting VM...
	I0612 13:39:03.978242    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-957600-m02
	I0612 13:39:07.027954    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:07.027954    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:07.027954    7444 main.go:141] libmachine: Waiting for host to start...
	I0612 13:39:07.028148    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:09.374184    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:09.374940    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:09.375255    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:11.950459    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:11.951440    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:12.957673    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:15.244330    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:15.244330    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:15.244330    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:17.850575    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:17.850685    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:18.857228    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:21.102729    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:21.102787    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:21.102787    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:23.670281    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:23.670281    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:24.683065    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:26.956345    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:26.956555    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:26.956555    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:29.589038    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:29.589038    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:30.602632    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:32.822026    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:32.822026    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:32.822779    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:35.421893    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:35.421893    7444 main.go:141] libmachine: [stderr =====>] : 
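
The repeated ( Hyper-V\Get-VM ... ).state / ipaddresses[0] pairs above are the "Waiting for host to start..." loop: the adapter reports no IPv4 address until the guest's DHCP lease lands, so the query is retried until a non-empty address (here 172.23.201.185) comes back. A condensed Go sketch of that loop, shelling out to powershell.exe the same way main.go does (the VM name and retry budget are illustrative):

// wait_for_ip.go — poll Hyper-V until a VM is Running and its first
// network adapter has an IP address.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func ps(script string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "ha-957600-m02"
	for i := 0; i < 60; i++ {
		state, _ := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if state == "Running" {
			ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if ip != "" {
				fmt.Println("VM is up at", ip) // e.g. 172.23.201.185 in the run above
				return
			}
		}
		time.Sleep(time.Second) // the log shows a probe cycle every few seconds
	}
	fmt.Println("timed out waiting for an IP")
}
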
	I0612 13:39:35.422932    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:37.559103    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:37.559103    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:37.559480    7444 machine.go:94] provisionDockerMachine start ...
	I0612 13:39:37.559480    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:39.701029    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:39.701029    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:39.701851    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:42.269798    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:42.269798    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:42.275072    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:39:42.286579    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:39:42.286579    7444 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 13:39:42.424161    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
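
Every provisioning step from here on follows the pattern shown above: resolve the VM's IP via Hyper-V, then open an SSH session as the docker user with the machine's id_rsa key and run one command ("hostname" here, which still returns the ISO default "minikube" until the next step renames it). A sketch of that SSH round trip, assuming the golang.org/x/crypto/ssh package and the key path from the log:

// ssh_run.go — dial the VM on port 22 and run a single command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.23.201.185:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}
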
	
	I0612 13:39:42.424161    7444 buildroot.go:166] provisioning hostname "ha-957600-m02"
	I0612 13:39:42.424933    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:44.554001    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:44.554001    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:44.554001    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:47.135454    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:47.135454    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:47.141610    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:39:47.142322    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:39:47.142322    7444 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957600-m02 && echo "ha-957600-m02" | sudo tee /etc/hostname
	I0612 13:39:47.305875    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957600-m02
	
	I0612 13:39:47.305987    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:49.433522    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:49.433923    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:49.434068    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:51.964941    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:51.964941    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:51.971469    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:39:51.971997    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:39:51.971997    7444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 13:39:52.119423    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 13:39:52.119423    7444 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 13:39:52.119423    7444 buildroot.go:174] setting up certificates
	I0612 13:39:52.119423    7444 provision.go:84] configureAuth start
	I0612 13:39:52.119423    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:54.239993    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:54.240673    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:54.240673    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:56.832040    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:56.832237    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:56.832333    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:59.007402    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:59.007402    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:59.007654    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:01.551689    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:01.554891    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:01.554891    7444 provision.go:143] copyHostCerts
	I0612 13:40:01.555234    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 13:40:01.555663    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 13:40:01.555743    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 13:40:01.556240    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 13:40:01.557448    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 13:40:01.557733    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 13:40:01.557819    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 13:40:01.558166    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 13:40:01.559021    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 13:40:01.559021    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 13:40:01.559021    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 13:40:01.559862    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 13:40:01.560793    7444 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-957600-m02 san=[127.0.0.1 172.23.201.185 ha-957600-m02 localhost minikube]
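
configureAuth generates a server certificate for the Docker daemon whose SANs cover every name the host may be reached by: 127.0.0.1, the VM IP, the hostname, localhost, and minikube, signed by the ca.pem/ca-key.pem pair copied above. A self-signed approximation in Go (minikube actually signs with its CA; the names and validity below mirror the log's san=[...] list and the 26280h CertExpiration):

// server_cert.go — emit a PEM certificate with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-957600-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		DNSNames:     []string{"ha-957600-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.23.201.185")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: the template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
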
	I0612 13:40:01.636530    7444 provision.go:177] copyRemoteCerts
	I0612 13:40:01.649537    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 13:40:01.649537    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:03.794049    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:03.794049    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:03.794412    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:06.326325    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:06.326325    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:06.326891    7444 sshutil.go:53] new ssh client: &{IP:172.23.201.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa Username:docker}
	I0612 13:40:06.439069    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7891683s)
	I0612 13:40:06.439130    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 13:40:06.439689    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 13:40:06.484150    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 13:40:06.484590    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 13:40:06.529103    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 13:40:06.529566    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 13:40:06.577929    7444 provision.go:87] duration metric: took 14.4584633s to configureAuth
	I0612 13:40:06.577993    7444 buildroot.go:189] setting minikube options for container-runtime
	I0612 13:40:06.578556    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:40:06.578680    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:08.718496    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:08.718496    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:08.718677    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:11.328555    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:11.328555    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:11.334940    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:11.335759    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:11.335759    7444 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 13:40:11.478737    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 13:40:11.478768    7444 buildroot.go:70] root file system type: tmpfs
	I0612 13:40:11.478960    7444 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 13:40:11.478960    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:13.660235    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:13.660235    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:13.660235    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:16.213094    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:16.213094    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:16.220956    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:16.221130    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:16.221130    7444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.203.104"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 13:40:16.387145    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.203.104
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 13:40:16.387254    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:18.572173    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:18.572449    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:18.572449    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:21.162854    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:21.162854    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:21.169369    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:21.169369    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:21.169369    7444 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 13:40:23.352425    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
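	The one-liner above is an idempotent install: diff exits zero only when the staged unit matches the existing one, so the mv/daemon-reload/enable/restart branch runs only on change. Here diff cannot even stat the destination (first boot of this node), the branch fires, and systemd prints the "Created symlink" line. As a reusable sketch (install_unit is an illustrative name, not minikube code):
	  install_unit() {   # install_unit <staged-file> <destination> <service>
	    sudo diff -u "$2" "$1" >/dev/null 2>&1 && return 0   # identical: nothing to do
	    sudo mv "$1" "$2"
	    sudo systemctl daemon-reload
	    sudo systemctl -f enable "$3" && sudo systemctl -f restart "$3"
	  }
	  install_unit /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service docker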
	I0612 13:40:23.352425    7444 machine.go:97] duration metric: took 45.792807s to provisionDockerMachine
	I0612 13:40:23.352425    7444 client.go:171] duration metric: took 1m56.7947565s to LocalClient.Create
	I0612 13:40:23.352425    7444 start.go:167] duration metric: took 1m56.7956727s to libmachine.API.Create "ha-957600"
	I0612 13:40:23.352425    7444 start.go:293] postStartSetup for "ha-957600-m02" (driver="hyperv")
	I0612 13:40:23.352425    7444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 13:40:23.365043    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 13:40:23.365043    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:25.540820    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:25.540820    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:25.540897    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:28.136093    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:28.136093    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:28.137010    7444 sshutil.go:53] new ssh client: &{IP:172.23.201.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa Username:docker}
	I0612 13:40:28.254816    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8897579s)
	I0612 13:40:28.268285    7444 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 13:40:28.275759    7444 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 13:40:28.275759    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 13:40:28.276328    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 13:40:28.277200    7444 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 13:40:28.277200    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 13:40:28.290126    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 13:40:28.309175    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 13:40:28.360898    7444 start.go:296] duration metric: took 5.008458s for postStartSetup
	I0612 13:40:28.364121    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:30.542866    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:30.543140    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:30.543140    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:33.177427    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:33.178226    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:33.178550    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:40:33.181088    7444 start.go:128] duration metric: took 2m6.6263947s to createHost
	I0612 13:40:33.181088    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:35.375898    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:35.375898    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:35.375898    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:37.910727    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:37.911265    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:37.916437    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:37.917299    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:37.917299    7444 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 13:40:38.063816    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224838.060224079
	
	I0612 13:40:38.063879    7444 fix.go:216] guest clock: 1718224838.060224079
	I0612 13:40:38.063941    7444 fix.go:229] Guest: 2024-06-12 13:40:38.060224079 -0700 PDT Remote: 2024-06-12 13:40:33.1810882 -0700 PDT m=+336.723281701 (delta=4.879135879s)
	I0612 13:40:38.063941    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:40.186558    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:40.186558    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:40.187600    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:42.766690    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:42.766690    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:42.773814    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:42.773896    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:42.773896    7444 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718224838
	I0612 13:40:42.929297    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 20:40:38 UTC 2024
	
	I0612 13:40:42.929459    7444 fix.go:236] clock set: Wed Jun 12 20:40:38 UTC 2024
	 (err=<nil>)
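	fix.go reads the guest clock over SSH with date +%s.%N, compares it to the host clock (a 4.879s delta here), and writes the host epoch back with date -s. The same check from a plain shell, assuming SSH access as the docker user; the one-second threshold below is an assumption, since minikube's actual cutoff is not visible in this log:
	  guest=$(ssh docker@172.23.201.185 'date +%s')   # guest clock, whole seconds
	  host=$(date +%s)                                # host clock
	  skew=$(( guest > host ? guest - host : host - guest ))
	  [ "$skew" -gt 1 ] && ssh docker@172.23.201.185 "sudo date -s @$host"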
	I0612 13:40:42.929459    7444 start.go:83] releasing machines lock for "ha-957600-m02", held for 2m16.3747372s
	I0612 13:40:42.929708    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:45.128236    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:45.128236    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:45.128878    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:47.648541    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:47.648576    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:47.651841    7444 out.go:177] * Found network options:
	I0612 13:40:47.654656    7444 out.go:177]   - NO_PROXY=172.23.203.104
	W0612 13:40:47.656873    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 13:40:47.659003    7444 out.go:177]   - NO_PROXY=172.23.203.104
	W0612 13:40:47.663194    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:40:47.665264    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 13:40:47.667814    7444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 13:40:47.667814    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:47.677154    7444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 13:40:47.677154    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:49.867090    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:49.867383    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:49.867481    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:49.902430    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:49.902985    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:49.902985    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:52.483377    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:52.483377    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:52.483480    7444 sshutil.go:53] new ssh client: &{IP:172.23.201.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa Username:docker}
	I0612 13:40:52.537949    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:52.537995    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:52.537995    7444 sshutil.go:53] new ssh client: &{IP:172.23.201.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa Username:docker}
	I0612 13:40:52.578098    7444 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9009296s)
	W0612 13:40:52.578098    7444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 13:40:52.591561    7444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 13:40:52.665688    7444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 13:40:52.665688    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:40:52.665688    7444 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9978595s)
	I0612 13:40:52.665688    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:40:52.714276    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 13:40:52.748860    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 13:40:52.770275    7444 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 13:40:52.782268    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 13:40:52.816382    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:40:52.849636    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 13:40:52.882633    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:40:52.915694    7444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 13:40:52.947948    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 13:40:52.980354    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 13:40:53.011933    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 13:40:53.051174    7444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 13:40:53.083502    7444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 13:40:53.114005    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:40:53.313109    7444 ssh_runner.go:195] Run: sudo systemctl restart containerd
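	The run of sed commands above edits /etc/containerd/config.toml in place: pause:3.9 as the sandbox image, cgroupfs instead of the systemd cgroup driver, the runc v2 shim for both legacy runtime names, and /etc/cni/net.d as the CNI conf dir, followed by a daemon-reload and a containerd restart. Condensed into one invocation using the same expressions as the log:
	  sudo sed -i -r \
	    -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' \
	    -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	    -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
	    -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
	    -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
	    /etc/containerd/config.toml
	  sudo systemctl daemon-reload && sudo systemctl restart containerd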
	I0612 13:40:53.345887    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:40:53.361415    7444 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 13:40:53.402850    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:40:53.437336    7444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 13:40:53.475234    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:40:53.509504    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:40:53.544910    7444 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 13:40:53.602806    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:40:53.626302    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:40:53.669913    7444 ssh_runner.go:195] Run: which cri-dockerd
	I0612 13:40:53.689214    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 13:40:53.708977    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 13:40:53.752588    7444 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 13:40:53.944076    7444 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 13:40:54.129393    7444 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 13:40:54.129535    7444 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 13:40:54.179641    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:40:54.378985    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:40:56.909081    7444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5300882s)
	I0612 13:40:56.921229    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 13:40:56.958759    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:40:56.993741    7444 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 13:40:57.191638    7444 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 13:40:57.406959    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:40:57.620110    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 13:40:57.663638    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:40:57.699652    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:40:57.911354    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
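	With containerd and crio stopped, the CRI endpoint is switched to cri-dockerd and the socket-activated units are unmasked, enabled, and restarted. Stripped of the ssh_runner framing, the sequence amounts to roughly:
	  printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml
	  sudo systemctl unmask docker.service cri-docker.socket
	  sudo systemctl enable docker.socket cri-docker.socket
	  sudo systemctl daemon-reload
	  sudo systemctl restart docker cri-docker.socket cri-docker.service
	  sudo crictl version   # should report RuntimeName: docker, as below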
	I0612 13:40:58.022124    7444 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 13:40:58.037792    7444 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 13:40:58.048542    7444 start.go:562] Will wait 60s for crictl version
	I0612 13:40:58.064979    7444 ssh_runner.go:195] Run: which crictl
	I0612 13:40:58.086892    7444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 13:40:58.142076    7444 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 13:40:58.151630    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:40:58.193697    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:40:58.228083    7444 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 13:40:58.232348    7444 out.go:177]   - env NO_PROXY=172.23.203.104
	I0612 13:40:58.235567    7444 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 13:40:58.240807    7444 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 13:40:58.240807    7444 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 13:40:58.240807    7444 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 13:40:58.240807    7444 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 13:40:58.244245    7444 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 13:40:58.244245    7444 ip.go:210] interface addr: 172.23.192.1/20
	I0612 13:40:58.257640    7444 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 13:40:58.264589    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
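	That bash one-liner is minikube's idempotent hosts-entry update: drop any existing line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. As a reusable function (add_host is an illustrative name):
	  add_host() {   # add_host <ip> <name>
	    { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	  }
	  add_host 172.23.192.1 host.minikube.internal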
	I0612 13:40:58.288178    7444 mustload.go:65] Loading cluster: ha-957600
	I0612 13:40:58.288998    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:40:58.289701    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:41:00.427841    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:41:00.428195    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:00.428195    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:41:00.428991    7444 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600 for IP: 172.23.201.185
	I0612 13:41:00.428991    7444 certs.go:194] generating shared ca certs ...
	I0612 13:41:00.429066    7444 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:41:00.429712    7444 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 13:41:00.430235    7444 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 13:41:00.430530    7444 certs.go:256] generating profile certs ...
	I0612 13:41:00.431501    7444 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key
	I0612 13:41:00.431617    7444 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.7389266d
	I0612 13:41:00.431936    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.7389266d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.203.104 172.23.201.185 172.23.207.254]
	I0612 13:41:00.616300    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.7389266d ...
	I0612 13:41:00.617316    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.7389266d: {Name:mk5aa24280130c6f7302d45d6a80b585d49ec1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:41:00.618835    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.7389266d ...
	I0612 13:41:00.618835    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.7389266d: {Name:mke68ba6ace12f6e280ee6403c498da322ea43b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:41:00.619255    7444 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.7389266d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt
	I0612 13:41:00.634246    7444 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.7389266d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key
	I0612 13:41:00.635317    7444 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key
	I0612 13:41:00.635317    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 13:41:00.635837    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 13:41:00.636035    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 13:41:00.636338    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 13:41:00.636338    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 13:41:00.636745    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 13:41:00.636946    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 13:41:00.637304    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
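	The apiserver cert generated above carries the service IPs, both control-plane node IPs, and the HA VIP 172.23.207.254 in its SAN list, so a client can validate TLS against any control-plane endpoint. A quick hedged inspection once the file is on disk (path abbreviated here):
	  openssl x509 -noout -text -in apiserver.crt | grep -A1 'Subject Alternative Name'
	  # expect IP Address entries for 10.96.0.1, 127.0.0.1, 10.0.0.1,
	  # 172.23.203.104, 172.23.201.185 and 172.23.207.254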
	I0612 13:41:00.637562    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 13:41:00.638159    7444 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 13:41:00.638211    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 13:41:00.638559    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 13:41:00.639272    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 13:41:00.639640    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 13:41:00.639904    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 13:41:00.640287    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:41:00.640540    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 13:41:00.640676    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 13:41:00.640880    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:41:02.782252    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:41:02.782252    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:02.782991    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:41:05.335972    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:41:05.335972    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:05.336973    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:41:05.441663    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0612 13:41:05.449087    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0612 13:41:05.481855    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0612 13:41:05.488656    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0612 13:41:05.518524    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0612 13:41:05.525519    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0612 13:41:05.559147    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0612 13:41:05.565001    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0612 13:41:05.594613    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0612 13:41:05.606327    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0612 13:41:05.646666    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0612 13:41:05.653742    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0612 13:41:05.670716    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 13:41:05.717982    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 13:41:05.761853    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 13:41:05.808882    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 13:41:05.856653    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0612 13:41:05.908010    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 13:41:05.971406    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 13:41:06.018165    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 13:41:06.064126    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 13:41:06.110977    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 13:41:06.157727    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 13:41:06.203026    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0612 13:41:06.235173    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0612 13:41:06.265600    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0612 13:41:06.296598    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0612 13:41:06.327103    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0612 13:41:06.356794    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0612 13:41:06.386571    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0612 13:41:06.430610    7444 ssh_runner.go:195] Run: openssl version
	I0612 13:41:06.453328    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 13:41:06.486040    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:41:06.492410    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:41:06.503892    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:41:06.524664    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 13:41:06.557700    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 13:41:06.587045    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 13:41:06.594262    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 13:41:06.605040    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 13:41:06.625597    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 13:41:06.655894    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 13:41:06.688148    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 13:41:06.695244    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 13:41:06.707600    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 13:41:06.729264    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
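	Each ln -fs above publishes a CA under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL's default lookup resolves trust anchors in /etc/ssl/certs. The generic two-step form:
	  pem=/usr/share/ca-certificates/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"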
	I0612 13:41:06.758532    7444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:41:06.765926    7444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 13:41:06.766013    7444 kubeadm.go:928] updating node {m02 172.23.201.185 8443 v1.30.1 docker true true} ...
	I0612 13:41:06.766013    7444 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.201.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 13:41:06.766013    7444 kube-vip.go:115] generating kube-vip config ...
	I0612 13:41:06.777313    7444 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 13:41:06.803300    7444 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 13:41:06.803501    7444 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
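	The generated manifest runs kube-vip as a static pod on each control plane: leader election on the plndr-cp-lock lease decides which node ARPs the VIP 172.23.207.254 on eth0, and lb_enable spreads port 8443 across members. Once kubelet picks the manifest up from /etc/kubernetes/manifests, a hedged smoke test (static pods take the usual <name>-<node> suffix, so the pod name below is an assumption):
	  curl -k https://172.23.207.254:8443/healthz        # the VIP should answer: ok
	  kubectl -n kube-system get pod kube-vip-ha-957600  # static pod on the first node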
	I0612 13:41:06.814458    7444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 13:41:06.831503    7444 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 13:41:06.842557    7444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 13:41:06.862515    7444 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0612 13:41:06.862689    7444 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0612 13:41:06.862689    7444 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
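	Each download is pinned to its published SHA-256 via the checksum=file:... query understood by minikube's downloader. The same verification by hand, against the identical release URLs:
	  ver=v1.30.1
	  for bin in kubelet kubectl kubeadm; do
	    curl -fsSLO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}"
	    curl -fsSLO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/${bin}.sha256"
	    echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum --check -
	  done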
	I0612 13:41:08.046660    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 13:41:08.060659    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 13:41:08.068688    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 13:41:08.068897    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 13:41:12.816200    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 13:41:12.827223    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 13:41:12.834953    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 13:41:12.835061    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 13:41:15.978485    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:41:16.005729    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 13:41:16.019682    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 13:41:16.025789    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 13:41:16.025789    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0612 13:41:16.649889    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0612 13:41:16.894813    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0612 13:41:16.928910    7444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 13:41:16.962118    7444 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 13:41:17.007494    7444 ssh_runner.go:195] Run: grep 172.23.207.254	control-plane.minikube.internal$ /etc/hosts
	I0612 13:41:17.013839    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 13:41:17.050446    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:41:17.253198    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:41:17.288024    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:41:17.288621    7444 start.go:316] joinCluster: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:41:17.288621    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 13:41:17.289217    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:41:19.430528    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:41:19.430771    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:19.430771    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:41:22.055986    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:41:22.055986    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:22.056641    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:41:22.353070    7444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0643514s)
	I0612 13:41:22.353190    7444 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:41:22.353268    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vawtol.6y8lqv0tes4381yl --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-957600-m02 --control-plane --apiserver-advertise-address=172.23.201.185 --apiserver-bind-port=8443"
	I0612 13:42:03.415394    7444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vawtol.6y8lqv0tes4381yl --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-957600-m02 --control-plane --apiserver-advertise-address=172.23.201.185 --apiserver-bind-port=8443": (41.0619126s)
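	The 41-second command above is a standard kubeadm control-plane join aimed at the VIP's DNS name, advertising the new node's own IP and using cri-dockerd as the CRI socket. Its generic shape, with the one-time token and CA hash elided:
	  sudo kubeadm join control-plane.minikube.internal:8443 \
	    --token <token> \
	    --discovery-token-ca-cert-hash sha256:<hash> \
	    --control-plane \
	    --apiserver-advertise-address=172.23.201.185 \
	    --apiserver-bind-port=8443 \
	    --cri-socket unix:///var/run/cri-dockerd.sock \
	    --node-name=ha-957600-m02 \
	    --ignore-preflight-errors=all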
	I0612 13:42:03.415394    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 13:42:04.209571    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957600-m02 minikube.k8s.io/updated_at=2024_06_12T13_42_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=ha-957600 minikube.k8s.io/primary=false
	I0612 13:42:04.382411    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-957600-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0612 13:42:04.571472    7444 start.go:318] duration metric: took 47.2827092s to joinCluster
	I0612 13:42:04.571472    7444 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:42:04.574475    7444 out.go:177] * Verifying Kubernetes components...
	I0612 13:42:04.572398    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:42:04.592293    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:42:04.941232    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:42:04.974500    7444 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:42:04.975377    7444 kapi.go:59] client config for ha-957600: &rest.Config{Host:"https://172.23.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0612 13:42:04.975513    7444 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.207.254:8443 with https://172.23.203.104:8443
	I0612 13:42:04.975826    7444 node_ready.go:35] waiting up to 6m0s for node "ha-957600-m02" to be "Ready" ...
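	Everything below is node_ready.go polling GET /api/v1/nodes/ha-957600-m02 roughly twice a second until the node's Ready condition turns true. A hedged kubectl equivalent of the same wait (not what the test itself runs):
	  kubectl wait --for=condition=Ready node/ha-957600-m02 --timeout=6m0s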
	I0612 13:42:04.976433    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:04.976433    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:04.976433    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:04.976433    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:04.991070    7444 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0612 13:42:05.490850    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:05.491133    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:05.491245    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:05.491245    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:05.498996    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:42:05.981445    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:05.981525    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:05.981561    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:05.981561    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:05.987710    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:06.490383    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:06.490556    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:06.490556    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:06.490556    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:06.495874    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:06.977248    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:06.977248    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:06.977248    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:06.977248    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:06.982070    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:06.983399    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:07.483227    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:07.483288    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:07.483288    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:07.483288    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:07.487907    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:07.987262    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:07.987262    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:07.987414    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:07.987414    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:07.992696    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:08.478978    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:08.478978    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:08.478978    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:08.478978    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:08.484676    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:08.986243    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:08.986299    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:08.986299    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:08.986365    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:08.997023    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:42:08.998160    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:09.490893    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:09.491102    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:09.491102    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:09.491102    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:09.496789    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:09.980148    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:09.980148    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:09.980148    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:09.980148    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:09.984769    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:10.489192    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:10.489192    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:10.489192    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:10.489192    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:10.495410    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:10.981540    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:10.981704    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:10.981778    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:10.981778    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:10.988429    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:11.489595    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:11.489878    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:11.489878    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:11.489878    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:11.496263    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:11.497333    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:11.978857    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:11.978857    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:11.978857    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:11.978857    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:11.984810    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:12.480844    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:12.481058    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:12.481058    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:12.481058    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:12.486227    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:12.987027    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:12.987280    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:12.987280    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:12.987280    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:12.992999    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:13.491883    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:13.491948    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:13.491948    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:13.491948    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:13.497419    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:13.498102    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:13.978307    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:13.978383    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:13.978383    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:13.978383    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:13.983846    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:14.480233    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:14.480233    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:14.480313    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:14.480313    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:14.489558    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:14.984706    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:14.984835    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:14.984835    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:14.984835    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:14.991112    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:15.485486    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:15.485486    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:15.485580    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:15.485580    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:15.490533    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:15.986107    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:15.986198    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:15.986198    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:15.986198    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:15.991198    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:15.992562    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:16.488385    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:16.488385    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:16.488385    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:16.488385    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:16.494100    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:16.990300    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:16.990493    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:16.990493    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:16.990493    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:16.998502    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:17.491946    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:17.491946    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:17.491946    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:17.491946    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:17.496589    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:17.979380    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:17.979380    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:17.979380    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:17.979380    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:17.985395    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:18.482659    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:18.482659    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.482659    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.482659    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.487250    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:18.488959    7444 node_ready.go:49] node "ha-957600-m02" has status "Ready":"True"
	I0612 13:42:18.488959    7444 node_ready.go:38] duration metric: took 13.512555s for node "ha-957600-m02" to be "Ready" ...
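The block above is minikube's node_ready wait: it re-GETs /api/v1/nodes/ha-957600-m02 roughly every 500ms until the node's Ready condition flips to "True" (13.5s here). A minimal sketch of the same pattern with client-go — the helper name and clientset wiring are ours, not minikube's actual node_ready.go code:

	// nodeready_sketch.go — a sketch, not minikube's implementation.
	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node object until its Ready condition is True,
	// mirroring the ~500ms GET cadence visible in the log above.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // "Ready":"True", as logged at 13:42:18
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}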
	I0612 13:42:18.488959    7444 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 13:42:18.489124    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:18.489187    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.489187    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.489187    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.496466    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:42:18.507757    7444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.507757    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fvjdp
	I0612 13:42:18.507757    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.507757    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.507757    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.512135    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:18.513026    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.513026    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.513026    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.513026    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.516376    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:42:18.518151    7444 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.518246    7444 pod_ready.go:81] duration metric: took 10.4883ms for pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.518246    7444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.518395    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wv2wz
	I0612 13:42:18.518434    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.518434    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.518434    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.527176    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:18.528899    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.528899    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.528899    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.528899    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.533184    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:18.534087    7444 pod_ready.go:92] pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.534194    7444 pod_ready.go:81] duration metric: took 15.8416ms for pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.534194    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.534194    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600
	I0612 13:42:18.534194    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.534349    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.534349    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.537536    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:42:18.538806    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.538806    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.538806    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.538864    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.542190    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:42:18.543236    7444 pod_ready.go:92] pod "etcd-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.543236    7444 pod_ready.go:81] duration metric: took 9.0417ms for pod "etcd-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.543236    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.543437    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600-m02
	I0612 13:42:18.543513    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.543513    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.543513    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.548293    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:18.549284    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:18.549284    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.549349    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.549349    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.552637    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:42:18.553964    7444 pod_ready.go:92] pod "etcd-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.553964    7444 pod_ready.go:81] duration metric: took 10.7277ms for pod "etcd-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.553964    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.683596    7444 request.go:629] Waited for 129.2887ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600
	I0612 13:42:18.683837    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600
	I0612 13:42:18.683837    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.683837    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.683837    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.689706    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:18.889218    7444 request.go:629] Waited for 198.4046ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.889619    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.889619    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.889690    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.889690    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.894806    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:18.896026    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.896026    7444 pod_ready.go:81] duration metric: took 342.0616ms for pod "kube-apiserver-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.896026    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.094154    7444 request.go:629] Waited for 197.5781ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m02
	I0612 13:42:19.094626    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m02
	I0612 13:42:19.094664    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.094664    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.094664    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.103248    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:19.284668    7444 request.go:629] Waited for 180.4139ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:19.284945    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:19.284945    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.284945    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.284945    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.293798    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:19.293798    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:19.293798    7444 pod_ready.go:81] duration metric: took 397.7707ms for pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.294683    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.490907    7444 request.go:629] Waited for 196.2234ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600
	I0612 13:42:19.490907    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600
	I0612 13:42:19.490907    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.490907    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.490907    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.499887    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:19.686951    7444 request.go:629] Waited for 185.8814ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:19.687196    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:19.687196    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.687196    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.687196    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.694066    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:19.694814    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:19.694814    7444 pod_ready.go:81] duration metric: took 400.1304ms for pod "kube-controller-manager-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.694814    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.892518    7444 request.go:629] Waited for 197.5558ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m02
	I0612 13:42:19.892732    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m02
	I0612 13:42:19.892732    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.892732    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.892732    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.898860    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:20.082771    7444 request.go:629] Waited for 182.5025ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:20.082875    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:20.082875    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.082875    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.082875    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.088067    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:20.089403    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:20.089488    7444 pod_ready.go:81] duration metric: took 394.6723ms for pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.089488    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j29r7" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.285117    7444 request.go:629] Waited for 195.3458ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j29r7
	I0612 13:42:20.285376    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j29r7
	I0612 13:42:20.285376    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.285376    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.285376    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.291100    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:20.487541    7444 request.go:629] Waited for 194.2676ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:20.487670    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:20.487670    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.487845    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.487845    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.492030    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:20.493938    7444 pod_ready.go:92] pod "kube-proxy-j29r7" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:20.494541    7444 pod_ready.go:81] duration metric: took 405.0009ms for pod "kube-proxy-j29r7" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.494541    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z94m6" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.690157    7444 request.go:629] Waited for 195.4742ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z94m6
	I0612 13:42:20.690414    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z94m6
	I0612 13:42:20.690414    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.690414    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.690414    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.696109    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:20.893067    7444 request.go:629] Waited for 195.3401ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:20.893402    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:20.893470    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.893470    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.893470    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.910684    7444 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0612 13:42:20.911655    7444 pod_ready.go:92] pod "kube-proxy-z94m6" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:20.911655    7444 pod_ready.go:81] duration metric: took 417.1123ms for pod "kube-proxy-z94m6" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.911655    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:21.093301    7444 request.go:629] Waited for 181.3269ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600
	I0612 13:42:21.093450    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600
	I0612 13:42:21.093450    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.093450    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.093450    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.104317    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:42:21.282964    7444 request.go:629] Waited for 177.529ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:21.282964    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:21.282964    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.282964    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.282964    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.289000    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:21.289900    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:21.289985    7444 pod_ready.go:81] duration metric: took 378.3287ms for pod "kube-scheduler-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:21.289985    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:21.484773    7444 request.go:629] Waited for 194.6909ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m02
	I0612 13:42:21.485299    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m02
	I0612 13:42:21.485299    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.485299    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.485299    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.492772    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:42:21.688684    7444 request.go:629] Waited for 194.5103ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:21.689010    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:21.689010    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.689092    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.689092    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.698560    7444 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 13:42:21.700002    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:21.700033    7444 pod_ready.go:81] duration metric: took 410.047ms for pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:21.700033    7444 pod_ready.go:38] duration metric: took 3.2109776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
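The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines (request.go:629) come from client-go's client-side rate limiter, which defaults to QPS=5 with Burst=10; the burst of pod_ready GETs above exceeds that budget, so requests are delayed locally before they ever reach the apiserver. A sketch of raising those limits on a rest.Config — the values are illustrative, not minikube's configuration:

	// throttle_sketch.go — illustrative limits, not minikube's settings.
	package sketch

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFasterClient builds a clientset whose client-side limiter allows more
	// than the default 5 req/s (burst 10) that produced the waits above.
	func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}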
	I0612 13:42:21.700115    7444 api_server.go:52] waiting for apiserver process to appear ...
	I0612 13:42:21.712208    7444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:42:21.743028    7444 api_server.go:72] duration metric: took 17.1715046s to wait for apiserver process to appear ...
	I0612 13:42:21.743192    7444 api_server.go:88] waiting for apiserver healthz status ...
	I0612 13:42:21.743192    7444 api_server.go:253] Checking apiserver healthz at https://172.23.203.104:8443/healthz ...
	I0612 13:42:21.752211    7444 api_server.go:279] https://172.23.203.104:8443/healthz returned 200:
	ok
	I0612 13:42:21.753277    7444 round_trippers.go:463] GET https://172.23.203.104:8443/version
	I0612 13:42:21.753277    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.753277    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.753277    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.755281    7444 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 13:42:21.755759    7444 api_server.go:141] control plane version: v1.30.1
	I0612 13:42:21.755859    7444 api_server.go:131] duration metric: took 12.6333ms to wait for apiserver health ...
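The healthz gate above is a plain HTTPS GET against https://172.23.203.104:8443/healthz that must return 200 with body "ok", followed by a /version call to read the control-plane version (v1.30.1). A bare-bones sketch of the probe — the TLS handling below is a placeholder, since minikube authenticates with the cluster's CA and client certificates rather than skipping verification:

	// healthz_sketch.go — probe shape only; TLS config is a placeholder.
	package sketch

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(endpoint string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Placeholder: real callers should load the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}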
	I0612 13:42:21.755892    7444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 13:42:21.891895    7444 request.go:629] Waited for 135.9031ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:21.891895    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:21.891895    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.892245    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.892245    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.902090    7444 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 13:42:21.909995    7444 system_pods.go:59] 17 kube-system pods found
	I0612 13:42:21.910053    7444 system_pods.go:61] "coredns-7db6d8ff4d-fvjdp" [6cb59655-8c1c-493a-89ee-b4ae9ceacdbb] Running
	I0612 13:42:21.910053    7444 system_pods.go:61] "coredns-7db6d8ff4d-wv2wz" [2c2ce90f-b175-4ea7-a936-878c326f66af] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "etcd-ha-957600" [7cce4e7e-9ea8-48f3-b7f5-dc4c445cfe5d] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "etcd-ha-957600-m02" [fa3c8b8b-4744-4a4f-8025-44485b3a7a5f] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kindnet-54xjp" [cf89e4c7-5d54-48fb-9a94-76364e2f3d3c] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kindnet-gdk8g" [0eac7aaf-2341-4580-92d1-ea700cf2fa0f] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kube-apiserver-ha-957600" [14343c48-f30d-430c-81e0-24b68835b4fd] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kube-apiserver-ha-957600-m02" [3ba7d864-6b01-4152-8027-2fe8e0d5d6bb] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kube-controller-manager-ha-957600" [3cc0e64f-a1d7-4062-b78a-b9de960cf935] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kube-controller-manager-ha-957600-m02" [fb9dba99-8e76-4c2f-b427-de3fee7d0300] Running
	I0612 13:42:21.910208    7444 system_pods.go:61] "kube-proxy-j29r7" [e87fe1ac-6577-44e3-af8f-c28e878fea08] Running
	I0612 13:42:21.910208    7444 system_pods.go:61] "kube-proxy-z94m6" [cdd33d94-1a1c-4038-aeda-0c6e1d68e559] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "kube-scheduler-ha-957600" [28ad5883-d593-42a7-952f-0038a7bb25d6] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "kube-scheduler-ha-957600-m02" [d3a27ea9-a208-4278-8a50-332971e8a78c] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "kube-vip-ha-957600" [2780187a-2cd6-43da-93bd-73c0dc959228] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "kube-vip-ha-957600-m02" [0908b051-1096-41ae-b457-36b2162ae907] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "storage-provisioner" [9a5d025e-c240-4084-a1bd-1db96161d3b3] Running
	I0612 13:42:21.910229    7444 system_pods.go:74] duration metric: took 154.3369ms to wait for pod list to return data ...
	I0612 13:42:21.910229    7444 default_sa.go:34] waiting for default service account to be created ...
	I0612 13:42:22.094744    7444 request.go:629] Waited for 184.0501ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/default/serviceaccounts
	I0612 13:42:22.094744    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/default/serviceaccounts
	I0612 13:42:22.094744    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:22.094744    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:22.094744    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:22.101352    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:22.102554    7444 default_sa.go:45] found service account: "default"
	I0612 13:42:22.102655    7444 default_sa.go:55] duration metric: took 192.3608ms for default service account to be created ...
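The default_sa gate lists ServiceAccounts in the "default" namespace until one named "default" exists, since pods cannot be created in a namespace before kube-controller-manager has minted that account. A compact sketch of the check:

	// defaultsa_sketch.go — a sketch of the default-ServiceAccount check.
	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
		sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, sa := range sas.Items {
			if sa.Name == "default" { // found, as logged at 13:42:22
				return true, nil
			}
		}
		return false, nil
	}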
	I0612 13:42:22.102655    7444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 13:42:22.296448    7444 request.go:629] Waited for 193.4167ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:22.296678    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:22.296863    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:22.296863    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:22.296955    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:22.305411    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:22.312789    7444 system_pods.go:86] 17 kube-system pods found
	I0612 13:42:22.312789    7444 system_pods.go:89] "coredns-7db6d8ff4d-fvjdp" [6cb59655-8c1c-493a-89ee-b4ae9ceacdbb] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "coredns-7db6d8ff4d-wv2wz" [2c2ce90f-b175-4ea7-a936-878c326f66af] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "etcd-ha-957600" [7cce4e7e-9ea8-48f3-b7f5-dc4c445cfe5d] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "etcd-ha-957600-m02" [fa3c8b8b-4744-4a4f-8025-44485b3a7a5f] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kindnet-54xjp" [cf89e4c7-5d54-48fb-9a94-76364e2f3d3c] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kindnet-gdk8g" [0eac7aaf-2341-4580-92d1-ea700cf2fa0f] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-apiserver-ha-957600" [14343c48-f30d-430c-81e0-24b68835b4fd] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-apiserver-ha-957600-m02" [3ba7d864-6b01-4152-8027-2fe8e0d5d6bb] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-controller-manager-ha-957600" [3cc0e64f-a1d7-4062-b78a-b9de960cf935] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-controller-manager-ha-957600-m02" [fb9dba99-8e76-4c2f-b427-de3fee7d0300] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-proxy-j29r7" [e87fe1ac-6577-44e3-af8f-c28e878fea08] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-proxy-z94m6" [cdd33d94-1a1c-4038-aeda-0c6e1d68e559] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-scheduler-ha-957600" [28ad5883-d593-42a7-952f-0038a7bb25d6] Running
	I0612 13:42:22.313384    7444 system_pods.go:89] "kube-scheduler-ha-957600-m02" [d3a27ea9-a208-4278-8a50-332971e8a78c] Running
	I0612 13:42:22.313384    7444 system_pods.go:89] "kube-vip-ha-957600" [2780187a-2cd6-43da-93bd-73c0dc959228] Running
	I0612 13:42:22.313384    7444 system_pods.go:89] "kube-vip-ha-957600-m02" [0908b051-1096-41ae-b457-36b2162ae907] Running
	I0612 13:42:22.313384    7444 system_pods.go:89] "storage-provisioner" [9a5d025e-c240-4084-a1bd-1db96161d3b3] Running
	I0612 13:42:22.313465    7444 system_pods.go:126] duration metric: took 210.8099ms to wait for k8s-apps to be running ...
	I0612 13:42:22.313465    7444 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 13:42:22.324004    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:42:22.349235    7444 system_svc.go:56] duration metric: took 35.7694ms WaitForService to wait for kubelet
	I0612 13:42:22.349392    7444 kubeadm.go:576] duration metric: took 17.7778671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 13:42:22.349392    7444 node_conditions.go:102] verifying NodePressure condition ...
	I0612 13:42:22.483364    7444 request.go:629] Waited for 133.7586ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes
	I0612 13:42:22.483546    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes
	I0612 13:42:22.483546    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:22.483546    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:22.483696    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:22.489070    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:22.489844    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:42:22.489844    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:42:22.489844    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:42:22.489844    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:42:22.489844    7444 node_conditions.go:105] duration metric: took 140.4516ms to run NodePressure ...
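The NodePressure figures (ephemeral storage 17734596Ki and 2 CPUs, printed once per node) are read straight from each node's status. A sketch of the same read:

	// capacity_sketch.go — reads the two capacities logged above.
	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}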
	I0612 13:42:22.489844    7444 start.go:240] waiting for startup goroutines ...
	I0612 13:42:22.489844    7444 start.go:254] writing updated cluster config ...
	I0612 13:42:22.494035    7444 out.go:177] 
	I0612 13:42:22.507717    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:42:22.508367    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:42:22.516908    7444 out.go:177] * Starting "ha-957600-m03" control-plane node in "ha-957600" cluster
	I0612 13:42:22.521852    7444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:42:22.521852    7444 cache.go:56] Caching tarball of preloaded images
	I0612 13:42:22.522341    7444 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 13:42:22.522655    7444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 13:42:22.522886    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:42:22.524188    7444 start.go:360] acquireMachinesLock for ha-957600-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 13:42:22.525169    7444 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-957600-m03"
	I0612 13:42:22.525169    7444 start.go:93] Provisioning new machine with config: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:42:22.525169    7444 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0612 13:42:22.531990    7444 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 13:42:22.531990    7444 start.go:159] libmachine.API.Create for "ha-957600" (driver="hyperv")
	I0612 13:42:22.532753    7444 client.go:168] LocalClient.Create starting
	I0612 13:42:22.533006    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 13:42:22.533483    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:42:22.533544    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:42:22.533720    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 13:42:22.533720    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:42:22.533720    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:42:22.533720    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 13:42:24.505639    7444 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 13:42:24.505639    7444 main.go:141] libmachine: [stderr =====>] : 
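From here on, every "[executing ==>]" line is one powershell.exe -NoProfile -NonInteractive invocation whose stdout and stderr are captured separately — which is exactly what the paired "[stdout =====>]" / "[stderr =====>]" lines echo back. A minimal sketch of that wrapper (the names are ours, not libmachine's):

	// hyperv_sketch.go — a sketch of the one-shot PowerShell wrapper pattern.
	package sketch

	import (
		"bytes"
		"os/exec"
	)

	// runPowerShell executes a single script in a fresh, non-interactive
	// PowerShell and returns stdout and stderr separately, like the
	// "[stdout =====>]" / "[stderr =====>]" pairs in the log.
	func runPowerShell(script string) (stdout, stderr string, err error) {
		cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script)
		var out, errb bytes.Buffer
		cmd.Stdout = &out
		cmd.Stderr = &errb
		err = cmd.Run()
		return out.String(), errb.String(), err
	}

	// Example: the module probe logged at 13:42:22 above.
	// runPowerShell(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)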
	I0612 13:42:24.505639    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 13:42:26.265969    7444 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 13:42:26.266443    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:26.266443    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:42:27.781494    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:42:27.781494    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:27.782429    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:42:31.603434    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:42:31.603434    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:31.605468    7444 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 13:42:32.076997    7444 main.go:141] libmachine: Creating SSH key...
	I0612 13:42:32.179558    7444 main.go:141] libmachine: Creating VM...
	I0612 13:42:32.179558    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:42:35.145942    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:42:35.145942    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:35.145942    7444 main.go:141] libmachine: Using switch "Default Switch"
	I0612 13:42:35.147132    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:42:36.934906    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:42:36.934906    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:36.935744    7444 main.go:141] libmachine: Creating VHD
	I0612 13:42:36.935744    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 13:42:40.771426    7444 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 52CA2E3A-85EC-4D22-8835-02E0CFA6A387
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 13:42:40.771491    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:40.771491    7444 main.go:141] libmachine: Writing magic tar header
	I0612 13:42:40.771491    7444 main.go:141] libmachine: Writing SSH key tar header
	I0612 13:42:40.780544    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 13:42:44.096351    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:44.096351    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:44.097204    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\disk.vhd' -SizeBytes 20000MB
	I0612 13:42:46.644613    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:46.644613    7444 main.go:141] libmachine: [stderr =====>] : 
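The disk build above is a three-step cmdlet pipeline: New-VHD creates a tiny 10MB fixed image, the Go side writes a tar header and the SSH key directly into that raw file ("Writing magic tar header" / "Writing SSH key tar header"), then Convert-VHD produces the dynamic disk.vhd and Resize-VHD grows it to the final 20000MB. A sketch chaining the same cmdlets through the runPowerShell wrapper sketched earlier:

	// vhd_sketch.go — the three cmdlet calls are copied from the log; the
	// function and error handling are ours. Reuses runPowerShell (above).
	package sketch

	import "fmt"

	func buildVHD(dir string) error {
		fixed := dir + `\fixed.vhd`
		disk := dir + `\disk.vhd`
		steps := []string{
			fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
			// ...the SSH-key tarball is written into fixed.vhd between steps...
			fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
			fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes 20000MB`, disk),
		}
		for _, s := range steps {
			if _, stderr, err := runPowerShell(s); err != nil {
				return fmt.Errorf("step failed: %v (%s)", err, stderr)
			}
		}
		return nil
	}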
	I0612 13:42:46.644613    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-957600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0612 13:42:50.361360    7444 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-957600-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 13:42:50.361360    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:50.362337    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-957600-m03 -DynamicMemoryEnabled $false
	I0612 13:42:52.625998    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:52.625998    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:52.625998    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-957600-m03 -Count 2
	I0612 13:42:54.835871    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:54.836882    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:54.836996    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-957600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\boot2docker.iso'
	I0612 13:42:57.506138    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:57.506236    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:57.506322    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-957600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\disk.vhd'
	I0612 13:43:00.270226    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:00.271052    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:00.271052    7444 main.go:141] libmachine: Starting VM...
	I0612 13:43:00.271052    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-957600-m03
	I0612 13:43:03.433514    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:03.433514    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:03.433514    7444 main.go:141] libmachine: Waiting for host to start...
	I0612 13:43:03.433514    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:05.782081    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:05.782081    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:05.782081    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:08.442413    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:08.442507    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:09.445627    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:11.759213    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:11.759213    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:11.759457    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:14.393086    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:14.393086    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:15.401413    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:17.670508    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:17.670508    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:17.670508    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:20.243279    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:20.243279    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:21.252210    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:23.501968    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:23.501968    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:23.502169    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:26.130542    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:26.130542    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:27.145341    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:29.415229    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:29.415457    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:29.415557    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:32.057468    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:32.057468    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:32.058327    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:34.261556    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:34.261556    7444 main.go:141] libmachine: [stderr =====>] : 
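
Editor's note: the "Waiting for host to start..." phase is a plain poll. The driver reads the VM state, then the first address of the first network adapter, sleeping about a second between attempts; stdout stays empty (the repeated blank [stdout =====>] lines) until the guest's DHCP lease arrives at 13:43:32. A self-contained sketch of that loop; ps and waitForIP are illustrative names:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps runs one non-interactive PowerShell command and returns trimmed stdout.
func ps(command string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls until the VM reports Running and its first NIC has an
// address; the IP query returns nothing until DHCP completes in the guest.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil // e.g. 172.23.207.166 in the run above
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
}

func main() {
	ip, err := waitForIP("ha-957600-m03", 5*time.Minute)
	fmt.Println(ip, err)
}
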
	I0612 13:43:34.262555    7444 machine.go:94] provisionDockerMachine start ...
	I0612 13:43:34.262619    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:36.456341    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:36.457304    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:36.457304    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:39.069760    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:39.069760    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:39.076699    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:43:39.076868    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:43:39.076868    7444 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 13:43:39.193656    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 13:43:39.193656    7444 buildroot.go:166] provisioning hostname "ha-957600-m03"
	I0612 13:43:39.193813    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:41.372809    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:41.372809    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:41.373348    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:43.987887    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:43.988118    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:43.993368    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:43:43.993759    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:43:43.993759    7444 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957600-m03 && echo "ha-957600-m03" | sudo tee /etc/hostname
	I0612 13:43:44.141641    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957600-m03
	
	I0612 13:43:44.141745    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:46.301068    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:46.301068    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:46.301306    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:48.885920    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:48.885920    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:48.895607    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:43:48.895607    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:43:48.895607    7444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957600-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957600-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957600-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 13:43:49.033469    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
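
Editor's note: hostname provisioning is three idempotent shell steps over SSH, as seen above: set the live hostname, persist it to /etc/hostname, then rewrite or append the 127.0.1.1 line in /etc/hosts so the node can resolve its own name without DNS. A sketch of how those command strings can be assembled; hostnameCmds is an illustrative helper, not minikube's:

package main

import "fmt"

// hostnameCmds returns the shell snippets run over SSH above, in order.
func hostnameCmds(name string) []string {
	return []string{
		// 1. live hostname, plus the persisted /etc/hostname
		fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name),
		// 2. rewrite-or-append the 127.0.1.1 entry, skipping the whole
		//    step if some line already resolves the name
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
	}
}

func main() {
	for _, c := range hostnameCmds("ha-957600-m03") {
		fmt.Println(c)
	}
}
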
	I0612 13:43:49.033469    7444 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 13:43:49.033469    7444 buildroot.go:174] setting up certificates
	I0612 13:43:49.033469    7444 provision.go:84] configureAuth start
	I0612 13:43:49.034029    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:51.225358    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:51.226309    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:51.226309    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:53.821502    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:53.821502    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:53.821596    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:56.006989    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:56.008019    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:56.008130    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:58.653093    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:58.653423    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:58.653423    7444 provision.go:143] copyHostCerts
	I0612 13:43:58.653588    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 13:43:58.654174    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 13:43:58.654308    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 13:43:58.654930    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 13:43:58.656568    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 13:43:58.656969    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 13:43:58.656969    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 13:43:58.657511    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 13:43:58.658868    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 13:43:58.659300    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 13:43:58.659300    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 13:43:58.659795    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 13:43:58.660750    7444 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-957600-m03 san=[127.0.0.1 172.23.207.166 ha-957600-m03 localhost minikube]
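
Editor's note: the server cert generated here is an ordinary CA-signed x509 certificate whose SAN list carries every name and address the Docker TLS endpoint may be reached by (loopback, the VM's DHCP address, the hostname, and the generic minikube aliases). A rough Go equivalent with crypto/x509, using a throwaway ECDSA CA in place of the ca.pem/ca-key.pem pair and ignoring errors for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA cert (stand-ins for ca-key.pem/ca.pem).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same style of SAN list as the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-957600-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.23.207.166")},
		DNSNames:     []string{"ha-957600-m03", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
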
	I0612 13:43:58.872014    7444 provision.go:177] copyRemoteCerts
	I0612 13:43:58.885906    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 13:43:58.886119    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:01.066180    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:01.066180    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:01.066336    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:03.658867    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:03.659817    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:03.659987    7444 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 13:44:03.767203    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8812834s)
	I0612 13:44:03.767203    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 13:44:03.768115    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 13:44:03.817387    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 13:44:03.817755    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 13:44:03.862826    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 13:44:03.863127    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 13:44:03.912402    7444 provision.go:87] duration metric: took 14.8788897s to configureAuth
	I0612 13:44:03.912402    7444 buildroot.go:189] setting minikube options for container-runtime
	I0612 13:44:03.913300    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:44:03.913497    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:06.070941    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:06.070998    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:06.070998    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:08.709965    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:08.709965    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:08.716687    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:08.717262    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:08.717262    7444 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 13:44:08.832615    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 13:44:08.832615    7444 buildroot.go:70] root file system type: tmpfs
	I0612 13:44:08.832615    7444 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 13:44:08.832615    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:11.021634    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:11.021634    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:11.022437    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:13.684358    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:13.684358    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:13.690418    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:13.691099    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:13.691099    7444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.203.104"
	Environment="NO_PROXY=172.23.203.104,172.23.201.185"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 13:44:13.845756    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.203.104
	Environment=NO_PROXY=172.23.203.104,172.23.201.185
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
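
Editor's note: the unit echoed back above is rendered from a template before being pushed as docker.service.new (it is swapped into place by the diff/mv command a few lines below). The empty ExecStart= clears the command inherited from the base unit, as the embedded comment explains, and one Environment="NO_PROXY=..." line is emitted per accumulated proxy exclusion, which is why two lines appear and the later, longer one wins. A cut-down text/template sketch of that rendering; the unit body is abbreviated, not the full file:

package main

import (
	"os"
	"text/template"
)

// Abbreviated unit: the empty ExecStart= resets the inherited command, and
// the range emits one Environment= line per NO_PROXY value.
const unit = `[Service]
Type=notify
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	// Matches the two lines in the log: each entry re-emits the full list
	// accumulated so far, so the last line supersedes the first.
	t.Execute(os.Stdout, struct{ NoProxy []string }{
		[]string{"172.23.203.104", "172.23.203.104,172.23.201.185"},
	})
}
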
	
	I0612 13:44:13.845756    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:16.086043    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:16.086536    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:16.086642    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:18.749436    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:18.749436    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:18.755392    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:18.755392    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:18.755392    7444 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 13:44:20.985330    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0612 13:44:20.985430    7444 machine.go:97] duration metric: took 46.7227401s to provisionDockerMachine
	I0612 13:44:20.985430    7444 client.go:171] duration metric: took 1m58.4523263s to LocalClient.Create
	I0612 13:44:20.985430    7444 start.go:167] duration metric: took 1m58.4530899s to libmachine.API.Create "ha-957600"
	I0612 13:44:20.985430    7444 start.go:293] postStartSetup for "ha-957600-m03" (driver="hyperv")
	I0612 13:44:20.985588    7444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 13:44:20.997134    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 13:44:20.997134    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:23.223240    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:23.224008    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:23.224008    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:25.860480    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:25.860480    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:25.860857    7444 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 13:44:25.979668    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.982496s)
	I0612 13:44:25.993145    7444 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 13:44:26.001167    7444 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 13:44:26.001167    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 13:44:26.001360    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 13:44:26.002216    7444 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 13:44:26.002356    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 13:44:26.013568    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 13:44:26.032719    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 13:44:26.081340    7444 start.go:296] duration metric: took 5.0958945s for postStartSetup
	I0612 13:44:26.084511    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:28.342570    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:28.343609    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:28.343609    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:30.949745    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:30.950009    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:30.950082    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:44:30.953837    7444 start.go:128] duration metric: took 2m8.4282883s to createHost
	I0612 13:44:30.953940    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:33.174218    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:33.174218    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:33.174218    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:35.809220    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:35.809220    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:35.815411    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:35.815939    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:35.816109    7444 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 13:44:35.940580    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718225075.920684081
	
	I0612 13:44:35.940580    7444 fix.go:216] guest clock: 1718225075.920684081
	I0612 13:44:35.940580    7444 fix.go:229] Guest: 2024-06-12 13:44:35.920684081 -0700 PDT Remote: 2024-06-12 13:44:30.9539401 -0700 PDT m=+574.495426101 (delta=4.966743981s)
	I0612 13:44:35.940580    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:38.145837    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:38.145837    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:38.145837    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:40.742046    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:40.743040    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:40.743696    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:40.747905    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:40.747940    7444 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718225075
	I0612 13:44:40.892181    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 20:44:35 UTC 2024
	
	I0612 13:44:40.892181    7444 fix.go:236] clock set: Wed Jun 12 20:44:35 UTC 2024
	 (err=<nil>)
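
Editor's note: the guest-clock fix reads `date +%s.%N` in the VM, compares it against the controller's wall clock (here a drift of about 4.97s accumulated during provisioning), and pins the guest with `sudo date -s @<seconds>` once the drift passes a threshold. A sketch of the comparison, assuming the guest timestamp has already been captured over SSH and that the fractional part carries full nanosecond precision, as in the sample value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1718225075.920684081" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 { // assumes a 9-digit nanosecond field, as above
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1718225075.920684081")
	delta := time.Since(guest)
	fmt.Printf("guest=%s delta=%s\n", guest, delta)
	if delta > time.Second || delta < -time.Second {
		// Pin the guest to the controller's clock, as the log's
		// `sudo date -s @1718225075` does over SSH.
		fmt.Printf("ssh: sudo date -s @%d\n", time.Now().Unix())
	}
}
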
	I0612 13:44:40.892181    7444 start.go:83] releasing machines lock for "ha-957600-m03", held for 2m18.3666034s
	I0612 13:44:40.892181    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:43.089516    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:43.090341    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:43.090402    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:45.711791    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:45.712707    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:45.715499    7444 out.go:177] * Found network options:
	I0612 13:44:45.721808    7444 out.go:177]   - NO_PROXY=172.23.203.104,172.23.201.185
	W0612 13:44:45.724707    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:44:45.724707    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 13:44:45.729901    7444 out.go:177]   - NO_PROXY=172.23.203.104,172.23.201.185
	W0612 13:44:45.732339    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:44:45.732339    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:44:45.733351    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:44:45.733351    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 13:44:45.736757    7444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 13:44:45.736927    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:45.752938    7444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 13:44:45.752938    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:50.879026    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:50.879026    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:50.879718    7444 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 13:44:50.904562    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:50.904562    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:50.905562    7444 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 13:44:51.048771    7444 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3119985s)
	I0612 13:44:51.048771    7444 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2958182s)
	W0612 13:44:51.048886    7444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 13:44:51.063457    7444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 13:44:51.096537    7444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 13:44:51.096621    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:44:51.096908    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:44:51.151011    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 13:44:51.186637    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 13:44:51.208422    7444 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 13:44:51.221177    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 13:44:51.255135    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:44:51.290804    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 13:44:51.328740    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:44:51.364911    7444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 13:44:51.396840    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 13:44:51.429986    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 13:44:51.463908    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 13:44:51.498030    7444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 13:44:51.533753    7444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 13:44:51.569682    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:44:51.798092    7444 ssh_runner.go:195] Run: sudo systemctl restart containerd
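
Editor's note: switching containerd to the cgroupfs driver is a fixed sequence of sed edits against /etc/containerd/config.toml (sandbox/pause image, SystemdCgroup = false, the runc v2 shim, the CNI conf dir), followed by a daemon-reload and restart. An abridged sketch of that list as data; the ssh_runner plumbing that executes each entry in order is elided:

package main

import "fmt"

// Abridged: the ordered commands run above; each executes as `sh -c "<cmd>"`
// on the guest and must succeed before the next one fires.
var containerdEdits = []string{
	`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
	`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
	`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
	`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart containerd`,
}

func main() {
	for _, e := range containerdEdits {
		fmt.Println(e)
	}
}
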
	I0612 13:44:51.832181    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:44:51.846261    7444 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 13:44:51.887558    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:44:51.928349    7444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 13:44:51.972441    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:44:52.013595    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:44:52.051544    7444 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 13:44:52.114982    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:44:52.142046    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:44:52.189762    7444 ssh_runner.go:195] Run: which cri-dockerd
	I0612 13:44:52.208763    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 13:44:52.230310    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 13:44:52.279514    7444 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 13:44:52.480453    7444 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 13:44:52.662708    7444 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 13:44:52.663717    7444 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 13:44:52.704709    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:44:52.919478    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:44:55.453209    7444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5336437s)
	I0612 13:44:55.463979    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 13:44:55.497551    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:44:55.532784    7444 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 13:44:55.744138    7444 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 13:44:55.947812    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:44:56.148634    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 13:44:56.190414    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:44:56.226411    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:44:56.429336    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 13:44:56.534902    7444 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 13:44:56.545119    7444 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 13:44:56.554119    7444 start.go:562] Will wait 60s for crictl version
	I0612 13:44:56.565760    7444 ssh_runner.go:195] Run: which crictl
	I0612 13:44:56.583915    7444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 13:44:56.638185    7444 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 13:44:56.647338    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:44:56.697196    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:44:56.734658    7444 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 13:44:56.738131    7444 out.go:177]   - env NO_PROXY=172.23.203.104
	I0612 13:44:56.741132    7444 out.go:177]   - env NO_PROXY=172.23.203.104,172.23.201.185
	I0612 13:44:56.745128    7444 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 13:44:56.750131    7444 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 13:44:56.750131    7444 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 13:44:56.750131    7444 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 13:44:56.750131    7444 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 13:44:56.753128    7444 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 13:44:56.753564    7444 ip.go:210] interface addr: 172.23.192.1/20
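
Editor's note: getIPForInterface scans the host's adapters for the Hyper-V switch by name prefix, skipping non-matching NICs, and then prefers the IPv4 address over the link-local IPv6 one (both are logged above; 172.23.192.1 becomes host.minikube.internal just below). The same search with the Go standard library:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterface returns the first IPv4 address on the first interface
// whose name starts with prefix (e.g. "vEthernet (Default Switch)").
func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				if v4 := ipnet.IP.To4(); v4 != nil {
					return v4, nil // 172.23.192.1 in the run above
				}
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	ip, err := ipForInterface("vEthernet (Default Switch)")
	fmt.Println(ip, err)
}
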
	I0612 13:44:56.764980    7444 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 13:44:56.771569    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 13:44:56.792581    7444 mustload.go:65] Loading cluster: ha-957600
	I0612 13:44:56.793614    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:44:56.793614    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:44:58.932880    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:58.933852    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:58.933905    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:44:58.934659    7444 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600 for IP: 172.23.207.166
	I0612 13:44:58.934659    7444 certs.go:194] generating shared ca certs ...
	I0612 13:44:58.934659    7444 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:44:58.935391    7444 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 13:44:58.935758    7444 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 13:44:58.936018    7444 certs.go:256] generating profile certs ...
	I0612 13:44:58.936801    7444 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key
	I0612 13:44:58.936978    7444 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.d3d55635
	I0612 13:44:58.937059    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.d3d55635 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.203.104 172.23.201.185 172.23.207.166 172.23.207.254]
	I0612 13:44:59.233230    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.d3d55635 ...
	I0612 13:44:59.233230    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.d3d55635: {Name:mkf1927f6658f26a3c5c8cdc9941635a8db96e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:44:59.235333    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.d3d55635 ...
	I0612 13:44:59.235333    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.d3d55635: {Name:mk6d15b5e0913ab7adc90bd98bcfcea07d9da2f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:44:59.235806    7444 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.d3d55635 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt
	I0612 13:44:59.247779    7444 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.d3d55635 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key
	I0612 13:44:59.248774    7444 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key
	I0612 13:44:59.248774    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 13:44:59.248774    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 13:44:59.250780    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 13:44:59.250780    7444 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 13:44:59.250780    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 13:44:59.251779    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 13:44:59.251779    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 13:44:59.251779    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 13:44:59.251779    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 13:44:59.252783    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:44:59.252783    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 13:44:59.252783    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 13:44:59.252783    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:45:01.413194    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:45:01.413698    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:45:01.413763    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:45:04.024837    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:45:04.024944    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:45:04.025121    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:45:04.124504    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0612 13:45:04.133038    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0612 13:45:04.166281    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0612 13:45:04.174802    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0612 13:45:04.209286    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0612 13:45:04.217132    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0612 13:45:04.251940    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0612 13:45:04.258557    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0612 13:45:04.291966    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0612 13:45:04.300233    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0612 13:45:04.334996    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0612 13:45:04.345497    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0612 13:45:04.365672    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 13:45:04.415413    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 13:45:04.475608    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 13:45:04.526997    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 13:45:04.574840    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0612 13:45:04.621602    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 13:45:04.668465    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 13:45:04.716131    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 13:45:04.764518    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 13:45:04.814660    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 13:45:04.867959    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 13:45:04.917870    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0612 13:45:04.953989    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0612 13:45:04.985129    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0612 13:45:05.018721    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0612 13:45:05.056744    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0612 13:45:05.089655    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0612 13:45:05.122286    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0612 13:45:05.169175    7444 ssh_runner.go:195] Run: openssl version
	I0612 13:45:05.189686    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 13:45:05.222188    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 13:45:05.229721    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 13:45:05.241233    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 13:45:05.263860    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 13:45:05.300596    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 13:45:05.339228    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:45:05.348291    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:45:05.360064    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:45:05.380455    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 13:45:05.416609    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 13:45:05.447603    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 13:45:05.455115    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 13:45:05.466956    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 13:45:05.487535    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
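
Editor's note: the `openssl x509 -hash` / `ln -fs` pairs above populate an OpenSSL hashed directory: clients locate a CA by the hash of its subject, so each PEM under /usr/share/ca-certificates gets a <subject-hash>.0 symlink in /etc/ssl/certs (b5213941.0 is minikubeCA's hash above). A sketch of the same step, shelling out to the openssl binary and assuming it is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkHashed creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-aware
// clients expect when looking a CA up by subject hash.
func linkHashed(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkHashed("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
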
	I0612 13:45:05.521678    7444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:45:05.530161    7444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 13:45:05.530559    7444 kubeadm.go:928] updating node {m03 172.23.207.166 8443 v1.30.1 docker true true} ...
	I0612 13:45:05.530779    7444 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957600-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.207.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
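
The kubelet unit above is rendered per node: only --hostname-override and --node-ip differ between the control-plane machines, the rest comes from the cluster config that follows it. A sketch of that substitution with text/template (the template text here is illustrative and abbreviated, not minikube's exact unit file):

package main

import (
	"os"
	"text/template"
)

// Illustrative drop-in template; the real unit minikube writes also
// carries the bootstrap/kubeconfig flags shown in the log above.
const unit = `[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart={{.Bin}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}
[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Bin":  "/var/lib/minikube/binaries/v1.30.1",
		"Node": "ha-957600-m03",
		"IP":   "172.23.207.166",
	})
}
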
	I0612 13:45:05.530852    7444 kube-vip.go:115] generating kube-vip config ...
	I0612 13:45:05.542993    7444 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 13:45:05.568545    7444 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 13:45:05.568636    7444 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
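
In the manifest above, the env block is the entire kube-vip configuration: cp_enable and lb_enable turn on control-plane load balancing, address is the HA VIP (172.23.207.254), and vip_leaseduration/vip_renewdeadline/vip_retryperiod (5/3/1 seconds) tune the leader election that decides which control-plane node holds the VIP. Since the file ends up at /etc/kubernetes/manifests/kube-vip.yaml (see the scp further down), a generated copy can be sanity-checked by unmarshalling just the fields of interest; a sketch using gopkg.in/yaml.v3:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Just the slice of the static-pod manifest we want to inspect.
type manifest struct {
	Spec struct {
		Containers []struct {
			Image string `yaml:"image"`
			Env   []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var m manifest
	if err := yaml.Unmarshal(data, &m); err != nil || len(m.Spec.Containers) == 0 {
		fmt.Fprintln(os.Stderr, "bad manifest:", err)
		return
	}
	for _, e := range m.Spec.Containers[0].Env {
		if e.Name == "address" {
			fmt.Println("VIP:", e.Value) // expect 172.23.207.254
		}
	}
}
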
	I0612 13:45:05.580451    7444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 13:45:05.600316    7444 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 13:45:05.611447    7444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 13:45:05.631049    7444 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0612 13:45:05.631049    7444 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0612 13:45:05.631049    7444 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
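
The checksum=file: suffix in those URLs means each binary is verified against the .sha256 file published next to it before it is installed. A condensed sketch of that download-and-verify pattern (fetchVerified is a hypothetical helper; the URL is the one from the log):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dst and compares the SHA-256 of the
// received bytes against the digest published at url+".sha256".
func fetchVerified(url, dst string) error {
	sum, err := httpGetString(url + ".sha256")
	if err != nil {
		return err
	}
	fields := strings.Fields(sum)
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	// Hash while writing so the file is read only once.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != fields[0] {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return nil
}

func httpGetString(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	fmt.Println(fetchVerified("https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl", "kubectl"))
}
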
	I0612 13:45:05.631686    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 13:45:05.631900    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 13:45:05.649865    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 13:45:05.649865    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 13:45:05.650879    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:45:05.656675    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 13:45:05.656938    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 13:45:05.657024    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 13:45:05.657024    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 13:45:05.710249    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 13:45:05.726439    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 13:45:05.812861    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 13:45:05.812861    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0612 13:45:06.986918    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0612 13:45:07.006251    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0612 13:45:07.038927    7444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 13:45:07.071682    7444 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 13:45:07.120495    7444 ssh_runner.go:195] Run: grep 172.23.207.254	control-plane.minikube.internal$ /etc/hosts
	I0612 13:45:07.128339    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
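
That one-liner is the usual replace-or-append idiom for /etc/hosts: filter out any existing control-plane.minikube.internal entry, append a fresh one pointing at the HA VIP, and copy the temp file back over /etc/hosts. The same operation in Go (setHostEntry is a hypothetical helper that, like the grep, only matches tab-separated entries):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostEntry rewrites path so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline from the log above.
func setHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(setHostEntry("/etc/hosts", "172.23.207.254", "control-plane.minikube.internal"))
}
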
	I0612 13:45:07.162394    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:45:07.374619    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:45:07.403267    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:45:07.404276    7444 start.go:316] joinCluster: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.23.207.166 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:45:07.404276    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 13:45:07.404276    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:45:09.602485    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:45:09.602485    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:45:09.603060    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:45:12.254466    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:45:12.254466    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:45:12.254701    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:45:12.498007    7444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0937159s)
	I0612 13:45:12.498179    7444 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.23.207.166 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:45:12.498275    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cl2ns1.38fwkh9or36p9019 --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-957600-m03 --control-plane --apiserver-advertise-address=172.23.207.166 --apiserver-bind-port=8443"
	I0612 13:45:57.585999    7444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cl2ns1.38fwkh9or36p9019 --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-957600-m03 --control-plane --apiserver-advertise-address=172.23.207.166 --apiserver-bind-port=8443": (45.0875391s)
	I0612 13:45:57.586543    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 13:45:58.319522    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957600-m03 minikube.k8s.io/updated_at=2024_06_12T13_45_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=ha-957600 minikube.k8s.io/primary=false
	I0612 13:45:58.513740    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-957600-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0612 13:45:58.690827    7444 start.go:318] duration metric: took 51.2863972s to joinCluster
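
The join sequence above has four steps: mint a long-lived join token on an existing control-plane node (token create --print-join-command --ttl=0), replay the printed kubeadm join with --control-plane on the new machine, enable and start kubelet, then label the node and strip its control-plane NoSchedule taint (the trailing '-' in the taint command means remove). A compressed local sketch of the same flow with os/exec (in the test these commands actually run over SSH inside the VMs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin := "/var/lib/minikube/binaries/v1.30.1"
	// 1. On an existing control plane: print a reusable join command.
	out, err := exec.Command("sudo", bin+"/kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	// 2. On the new node: replay it as an additional control plane.
	join := strings.TrimSpace(string(out)) +
		" --control-plane --apiserver-advertise-address=172.23.207.166"
	fmt.Println("would run:", join)
	// 3/4. Then enable kubelet, label the node, and delete the taint;
	// the trailing '-' removes node-role.kubernetes.io/control-plane:NoSchedule.
	fmt.Println("would run: kubectl taint nodes ha-957600-m03 node-role.kubernetes.io/control-plane:NoSchedule-")
}
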
	I0612 13:45:58.690827    7444 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.23.207.166 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:45:58.691924    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:45:58.694138    7444 out.go:177] * Verifying Kubernetes components...
	I0612 13:45:58.712882    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:45:59.128830    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:45:59.165218    7444 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:45:59.166327    7444 kapi.go:59] client config for ha-957600: &rest.Config{Host:"https://172.23.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0612 13:45:59.166524    7444 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.207.254:8443 with https://172.23.203.104:8443
	I0612 13:45:59.167439    7444 node_ready.go:35] waiting up to 6m0s for node "ha-957600-m03" to be "Ready" ...
	I0612 13:45:59.167559    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:45:59.167633    7444 round_trippers.go:469] Request Headers:
	I0612 13:45:59.167633    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:45:59.167706    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:45:59.184001    7444 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0612 13:45:59.678184    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:45:59.678184    7444 round_trippers.go:469] Request Headers:
	I0612 13:45:59.678184    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:45:59.678184    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:45:59.682764    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:00.183622    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:00.183622    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:00.183622    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:00.183622    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:00.189219    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:00.675130    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:00.675208    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:00.675208    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:00.675208    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:00.679692    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:01.170938    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:01.170938    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:01.170938    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:01.170938    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:01.178425    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:46:01.179603    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:01.677952    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:01.677952    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:01.677952    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:01.677952    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:01.683119    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:02.169729    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:02.169859    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:02.169859    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:02.169859    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:02.177857    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:46:02.678909    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:02.678909    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:02.678909    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:02.678909    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:02.683908    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:03.171709    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:03.171776    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:03.171776    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:03.171776    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:03.178388    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:03.675746    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:03.675815    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:03.675815    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:03.675815    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:03.681355    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:03.682523    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:04.168582    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:04.168648    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:04.168746    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:04.168746    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:04.173601    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:04.672923    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:04.673021    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:04.673021    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:04.673021    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:04.682829    7444 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 13:46:05.180712    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:05.180825    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:05.180825    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:05.180825    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:05.266098    7444 round_trippers.go:574] Response Status: 200 OK in 85 milliseconds
	I0612 13:46:05.668954    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:05.669077    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:05.669077    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:05.669077    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:05.673361    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:06.169738    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:06.169738    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:06.169738    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:06.169858    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:06.175965    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:06.177093    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:06.673056    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:06.673121    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:06.673201    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:06.673201    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:06.678472    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:07.177358    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:07.177436    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:07.177436    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:07.177436    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:07.181951    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:07.679683    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:07.679746    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:07.679746    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:07.679746    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:07.686804    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:46:08.180984    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:08.181047    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:08.181047    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:08.181047    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:08.188042    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:08.188816    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:08.681926    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:08.681926    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:08.681926    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:08.681926    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:08.686470    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:09.170979    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:09.170979    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:09.170979    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:09.171148    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:09.176416    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:09.668864    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:09.668864    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:09.668864    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:09.668864    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:09.673570    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:10.170909    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:10.171011    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:10.171011    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:10.171011    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:10.177551    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:10.671392    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:10.671503    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:10.671503    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:10.671503    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:10.676811    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:10.678386    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:11.174535    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:11.174535    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.174535    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.174535    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.181183    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:11.678579    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:11.678847    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.678847    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.678847    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.683357    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:46:11.683995    7444 node_ready.go:49] node "ha-957600-m03" has status "Ready":"True"
	I0612 13:46:11.683995    7444 node_ready.go:38] duration metric: took 12.5165181s for node "ha-957600-m03" to be "Ready" ...
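
The GET loop above, repeated roughly every 500 ms, is the whole readiness check: fetch the Node object and look for a NodeReady condition with status True, giving up once the 6-minute budget runs out. The equivalent with client-go (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-957600-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	fmt.Println("timed out waiting for Ready")
}
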
	I0612 13:46:11.684069    7444 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 13:46:11.684173    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:11.684173    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.684173    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.684173    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.699875    7444 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
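
For the system-pods phase, one List over kube-system seeds the set, and each component is then matched by the label selectors named above (k8s-app=kube-dns, component=etcd, and so on). Fetching one of those groups directly looks like this (sketch; placeholder kubeconfig path):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One of the selectors from the log: the CoreDNS (kube-dns) pods.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
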
	I0612 13:46:11.711431    7444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.711431    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fvjdp
	I0612 13:46:11.711431    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.711431    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.711431    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.717200    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:11.717899    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:11.717899    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.717899    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.717899    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.722191    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:11.723704    7444 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:11.723704    7444 pod_ready.go:81] duration metric: took 12.273ms for pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.723791    7444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.723860    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wv2wz
	I0612 13:46:11.723860    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.723968    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.723968    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.728523    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:11.729467    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:11.729467    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.729467    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.729467    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.740076    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:46:11.740765    7444 pod_ready.go:92] pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:11.740872    7444 pod_ready.go:81] duration metric: took 17.0806ms for pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.740872    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.740872    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600
	I0612 13:46:11.740872    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.740872    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.740872    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.749119    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:46:11.750029    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:11.750029    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.750029    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.750029    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.770947    7444 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0612 13:46:11.771749    7444 pod_ready.go:92] pod "etcd-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:11.771749    7444 pod_ready.go:81] duration metric: took 30.8769ms for pod "etcd-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.771749    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.771749    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600-m02
	I0612 13:46:11.771749    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.771749    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.771749    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.776158    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:11.778089    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:11.778210    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.778210    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.778241    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.782259    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:11.782306    7444 pod_ready.go:92] pod "etcd-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:11.782843    7444 pod_ready.go:81] duration metric: took 11.0937ms for pod "etcd-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.782913    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.880796    7444 request.go:629] Waited for 97.6496ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600-m03
	I0612 13:46:11.880796    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600-m03
	I0612 13:46:11.881016    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.881016    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.881016    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.887211    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:12.085984    7444 request.go:629] Waited for 197.7915ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:12.086191    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:12.086191    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.086191    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.086191    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.094251    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:46:12.094963    7444 pod_ready.go:92] pod "etcd-ha-957600-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:12.094963    7444 pod_ready.go:81] duration metric: took 312.0488ms for pod "etcd-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
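
The "Waited for ... due to client-side throttling" messages come from the client's own rate limiter, not the API server: with the default rest.Config limits (QPS 5, Burst 10), the back-to-back pod and node GETs above each absorb a delay of roughly 100-200 ms. If that cadence mattered, the limiter can be widened when the client is built; a sketch (values are illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; raising them removes the
	// client-side waits logged above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("limiter: qps=%v burst=%v, client=%T\n", cfg.QPS, cfg.Burst, cs)
}
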
	I0612 13:46:12.094963    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:12.290270    7444 request.go:629] Waited for 195.0296ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600
	I0612 13:46:12.290459    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600
	I0612 13:46:12.290502    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.290502    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.290502    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.295340    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:12.478960    7444 request.go:629] Waited for 182.0165ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:12.479256    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:12.479256    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.479256    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.479256    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.484160    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:12.484160    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:12.485607    7444 pod_ready.go:81] duration metric: took 390.6428ms for pod "kube-apiserver-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:12.485607    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:12.680806    7444 request.go:629] Waited for 194.7748ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m02
	I0612 13:46:12.681124    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m02
	I0612 13:46:12.681216    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.681216    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.681216    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.686906    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:12.885832    7444 request.go:629] Waited for 197.7334ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:12.886319    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:12.886417    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.886417    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.886417    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.895316    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:46:12.896682    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:12.896682    7444 pod_ready.go:81] duration metric: took 411.0737ms for pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:12.896758    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.089726    7444 request.go:629] Waited for 192.9056ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m03
	I0612 13:46:13.090163    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m03
	I0612 13:46:13.090282    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.090282    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.090282    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.095727    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:13.278834    7444 request.go:629] Waited for 182.21ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:13.279172    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:13.279273    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.279273    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.279318    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.284930    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:13.285979    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:13.286178    7444 pod_ready.go:81] duration metric: took 389.4194ms for pod "kube-apiserver-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.286350    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.481234    7444 request.go:629] Waited for 194.6781ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600
	I0612 13:46:13.481435    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600
	I0612 13:46:13.481435    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.481435    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.481504    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.487519    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:13.682618    7444 request.go:629] Waited for 193.1799ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:13.682618    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:13.682618    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.682618    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.682618    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.688862    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:13.689922    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:13.690038    7444 pod_ready.go:81] duration metric: took 403.6866ms for pod "kube-controller-manager-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.690038    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.886594    7444 request.go:629] Waited for 196.3295ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m02
	I0612 13:46:13.886824    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m02
	I0612 13:46:13.886824    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.886824    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.886824    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.893769    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:14.090060    7444 request.go:629] Waited for 194.1022ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:14.090188    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:14.090188    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.090188    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.090188    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.096660    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:14.097391    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:14.097466    7444 pod_ready.go:81] duration metric: took 407.4266ms for pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.097466    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.292644    7444 request.go:629] Waited for 194.9513ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m03
	I0612 13:46:14.292780    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m03
	I0612 13:46:14.292780    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.292780    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.292863    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.297509    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:14.479224    7444 request.go:629] Waited for 180.4095ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:14.479448    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:14.479448    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.479511    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.479511    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.486222    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:14.486985    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:14.486985    7444 pod_ready.go:81] duration metric: took 389.5183ms for pod "kube-controller-manager-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.486985    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9qwpr" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.683188    7444 request.go:629] Waited for 195.9858ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9qwpr
	I0612 13:46:14.683332    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9qwpr
	I0612 13:46:14.683383    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.683383    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.683383    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.690996    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:46:14.886991    7444 request.go:629] Waited for 194.7114ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:14.887072    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:14.887072    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.887072    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.887072    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.893767    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:14.894909    7444 pod_ready.go:92] pod "kube-proxy-9qwpr" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:14.894909    7444 pod_ready.go:81] duration metric: took 407.9231ms for pod "kube-proxy-9qwpr" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.894909    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j29r7" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.089446    7444 request.go:629] Waited for 194.0928ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j29r7
	I0612 13:46:15.089748    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j29r7
	I0612 13:46:15.089748    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.089748    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.089748    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.094357    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:15.293855    7444 request.go:629] Waited for 198.6093ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:15.293855    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:15.294002    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.294002    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.294002    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.300054    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:15.301280    7444 pod_ready.go:92] pod "kube-proxy-j29r7" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:15.301280    7444 pod_ready.go:81] duration metric: took 406.2496ms for pod "kube-proxy-j29r7" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.301280    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z94m6" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.481124    7444 request.go:629] Waited for 179.5004ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z94m6
	I0612 13:46:15.481258    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z94m6
	I0612 13:46:15.481258    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.481258    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.481393    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.486724    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:15.684545    7444 request.go:629] Waited for 196.5477ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:15.684545    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:15.684545    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.684545    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.684545    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.694885    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:46:15.695702    7444 pod_ready.go:92] pod "kube-proxy-z94m6" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:15.695803    7444 pod_ready.go:81] duration metric: took 394.5213ms for pod "kube-proxy-z94m6" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.695851    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.886688    7444 request.go:629] Waited for 190.4819ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600
	I0612 13:46:15.886943    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600
	I0612 13:46:15.886943    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.886943    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.886943    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.895660    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:46:16.088389    7444 request.go:629] Waited for 191.6675ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:16.088480    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:16.088480    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.088578    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.088578    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.093851    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:16.094483    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:16.094483    7444 pod_ready.go:81] duration metric: took 398.6304ms for pod "kube-scheduler-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.094483    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.291115    7444 request.go:629] Waited for 196.4571ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m02
	I0612 13:46:16.291242    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m02
	I0612 13:46:16.291379    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.291450    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.291450    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.296721    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:16.478901    7444 request.go:629] Waited for 180.4988ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:16.479375    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:16.479375    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.479436    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.479465    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.484429    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:16.486046    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:16.486117    7444 pod_ready.go:81] duration metric: took 391.6327ms for pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.486117    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.684926    7444 request.go:629] Waited for 198.4825ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m03
	I0612 13:46:16.685040    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m03
	I0612 13:46:16.685040    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.685040    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.685040    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.690478    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:16.888825    7444 request.go:629] Waited for 196.8388ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:16.888913    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:16.888913    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.888913    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.888913    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.898975    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:46:16.899763    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:16.899763    7444 pod_ready.go:81] duration metric: took 413.5668ms for pod "kube-scheduler-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.899852    7444 pod_ready.go:38] duration metric: took 5.2156785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 13:46:16.899852    7444 api_server.go:52] waiting for apiserver process to appear ...
	I0612 13:46:16.913160    7444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:46:16.943463    7444 api_server.go:72] duration metric: took 18.2525813s to wait for apiserver process to appear ...
	I0612 13:46:16.943548    7444 api_server.go:88] waiting for apiserver healthz status ...
	I0612 13:46:16.943548    7444 api_server.go:253] Checking apiserver healthz at https://172.23.203.104:8443/healthz ...
	I0612 13:46:16.949872    7444 api_server.go:279] https://172.23.203.104:8443/healthz returned 200:
	ok
	I0612 13:46:16.950247    7444 round_trippers.go:463] GET https://172.23.203.104:8443/version
	I0612 13:46:16.950247    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.950247    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.950247    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.951964    7444 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 13:46:16.952386    7444 api_server.go:141] control plane version: v1.30.1
	I0612 13:46:16.952386    7444 api_server.go:131] duration metric: took 8.8384ms to wait for apiserver health ...
	I0612 13:46:16.952386    7444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 13:46:17.089393    7444 request.go:629] Waited for 136.7393ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:17.089393    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:17.089393    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:17.089393    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:17.089393    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:17.101970    7444 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0612 13:46:17.113512    7444 system_pods.go:59] 24 kube-system pods found
	I0612 13:46:17.113512    7444 system_pods.go:61] "coredns-7db6d8ff4d-fvjdp" [6cb59655-8c1c-493a-89ee-b4ae9ceacdbb] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "coredns-7db6d8ff4d-wv2wz" [2c2ce90f-b175-4ea7-a936-878c326f66af] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "etcd-ha-957600" [7cce4e7e-9ea8-48f3-b7f5-dc4c445cfe5d] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "etcd-ha-957600-m02" [fa3c8b8b-4744-4a4f-8025-44485b3a7a5f] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "etcd-ha-957600-m03" [e9fc9fc8-f655-49e6-98aa-5a772b66992d] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kindnet-54xjp" [cf89e4c7-5d54-48fb-9a94-76364e2f3d3c] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kindnet-gdk8g" [0eac7aaf-2341-4580-92d1-ea700cf2fa0f] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kindnet-mwpsf" [c191d87f-04fd-4e6c-b2fe-97e4c4e9db23] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-apiserver-ha-957600" [14343c48-f30d-430c-81e0-24b68835b4fd] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-apiserver-ha-957600-m02" [3ba7d864-6b01-4152-8027-2fe8e0d5d6bb] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-apiserver-ha-957600-m03" [4ad9ac9f-d682-431a-8a91-42e27c853f2b] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-controller-manager-ha-957600" [3cc0e64f-a1d7-4062-b78a-b9de960cf935] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-controller-manager-ha-957600-m02" [fb9dba99-8e76-4c2f-b427-de3fee7d0300] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-controller-manager-ha-957600-m03" [54e543ef-8ef5-43e4-b669-71eba6c9b629] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-proxy-9qwpr" [424d5d60-76b3-47ce-bc8f-75f61fccdd9a] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-proxy-j29r7" [e87fe1ac-6577-44e3-af8f-c28e878fea08] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-proxy-z94m6" [cdd33d94-1a1c-4038-aeda-0c6e1d68e559] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-scheduler-ha-957600" [28ad5883-d593-42a7-952f-0038a7bb25d6] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-scheduler-ha-957600-m02" [d3a27ea9-a208-4278-8a50-332971e8a78c] Running
	I0612 13:46:17.113837    7444 system_pods.go:61] "kube-scheduler-ha-957600-m03" [3288ad97-c220-44a6-bde1-a329e7dab060] Running
	I0612 13:46:17.113941    7444 system_pods.go:61] "kube-vip-ha-957600" [2780187a-2cd6-43da-93bd-73c0dc959228] Running
	I0612 13:46:17.113941    7444 system_pods.go:61] "kube-vip-ha-957600-m02" [0908b051-1096-41ae-b457-36b2162ae907] Running
	I0612 13:46:17.113941    7444 system_pods.go:61] "kube-vip-ha-957600-m03" [30ec686b-5763-4ed8-b4d7-a7eab172d0d8] Running
	I0612 13:46:17.113941    7444 system_pods.go:61] "storage-provisioner" [9a5d025e-c240-4084-a1bd-1db96161d3b3] Running
	I0612 13:46:17.113941    7444 system_pods.go:74] duration metric: took 161.5545ms to wait for pod list to return data ...
	I0612 13:46:17.113941    7444 default_sa.go:34] waiting for default service account to be created ...
	I0612 13:46:17.283083    7444 request.go:629] Waited for 169.1418ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/default/serviceaccounts
	I0612 13:46:17.283083    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/default/serviceaccounts
	I0612 13:46:17.283083    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:17.283083    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:17.283083    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:17.289074    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:17.289431    7444 default_sa.go:45] found service account: "default"
	I0612 13:46:17.289431    7444 default_sa.go:55] duration metric: took 175.4889ms for default service account to be created ...
	I0612 13:46:17.289431    7444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 13:46:17.487128    7444 request.go:629] Waited for 197.5015ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:17.487267    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:17.487267    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:17.487267    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:17.487462    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:17.500302    7444 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0612 13:46:17.511078    7444 system_pods.go:86] 24 kube-system pods found
	I0612 13:46:17.511078    7444 system_pods.go:89] "coredns-7db6d8ff4d-fvjdp" [6cb59655-8c1c-493a-89ee-b4ae9ceacdbb] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "coredns-7db6d8ff4d-wv2wz" [2c2ce90f-b175-4ea7-a936-878c326f66af] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "etcd-ha-957600" [7cce4e7e-9ea8-48f3-b7f5-dc4c445cfe5d] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "etcd-ha-957600-m02" [fa3c8b8b-4744-4a4f-8025-44485b3a7a5f] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "etcd-ha-957600-m03" [e9fc9fc8-f655-49e6-98aa-5a772b66992d] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kindnet-54xjp" [cf89e4c7-5d54-48fb-9a94-76364e2f3d3c] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kindnet-gdk8g" [0eac7aaf-2341-4580-92d1-ea700cf2fa0f] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kindnet-mwpsf" [c191d87f-04fd-4e6c-b2fe-97e4c4e9db23] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-apiserver-ha-957600" [14343c48-f30d-430c-81e0-24b68835b4fd] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-apiserver-ha-957600-m02" [3ba7d864-6b01-4152-8027-2fe8e0d5d6bb] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-apiserver-ha-957600-m03" [4ad9ac9f-d682-431a-8a91-42e27c853f2b] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-controller-manager-ha-957600" [3cc0e64f-a1d7-4062-b78a-b9de960cf935] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-controller-manager-ha-957600-m02" [fb9dba99-8e76-4c2f-b427-de3fee7d0300] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-controller-manager-ha-957600-m03" [54e543ef-8ef5-43e4-b669-71eba6c9b629] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-proxy-9qwpr" [424d5d60-76b3-47ce-bc8f-75f61fccdd9a] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-proxy-j29r7" [e87fe1ac-6577-44e3-af8f-c28e878fea08] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-proxy-z94m6" [cdd33d94-1a1c-4038-aeda-0c6e1d68e559] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-scheduler-ha-957600" [28ad5883-d593-42a7-952f-0038a7bb25d6] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-scheduler-ha-957600-m02" [d3a27ea9-a208-4278-8a50-332971e8a78c] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-scheduler-ha-957600-m03" [3288ad97-c220-44a6-bde1-a329e7dab060] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-vip-ha-957600" [2780187a-2cd6-43da-93bd-73c0dc959228] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-vip-ha-957600-m02" [0908b051-1096-41ae-b457-36b2162ae907] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-vip-ha-957600-m03" [30ec686b-5763-4ed8-b4d7-a7eab172d0d8] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "storage-provisioner" [9a5d025e-c240-4084-a1bd-1db96161d3b3] Running
	I0612 13:46:17.511078    7444 system_pods.go:126] duration metric: took 221.6467ms to wait for k8s-apps to be running ...
	I0612 13:46:17.511078    7444 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 13:46:17.524737    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:46:17.553797    7444 system_svc.go:56] duration metric: took 42.7192ms WaitForService to wait for kubelet
	I0612 13:46:17.554662    7444 kubeadm.go:576] duration metric: took 18.8637783s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 13:46:17.554662    7444 node_conditions.go:102] verifying NodePressure condition ...
	I0612 13:46:17.690117    7444 request.go:629] Waited for 134.9103ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes
	I0612 13:46:17.690117    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes
	I0612 13:46:17.690117    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:17.690117    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:17.690117    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:17.695714    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:17.697422    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:46:17.697505    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:46:17.697505    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:46:17.697505    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:46:17.697505    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:46:17.697505    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:46:17.697505    7444 node_conditions.go:105] duration metric: took 142.8429ms to run NodePressure ...
	I0612 13:46:17.697505    7444 start.go:240] waiting for startup goroutines ...
	I0612 13:46:17.697593    7444 start.go:254] writing updated cluster config ...
	I0612 13:46:17.710529    7444 ssh_runner.go:195] Run: rm -f paused
	I0612 13:46:17.853956    7444 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 13:46:17.858608    7444 out.go:177] * Done! kubectl is now configured to use "ha-957600" cluster and "default" namespace by default
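The pod_ready phase traced above follows a simple loop: GET the pod, check its Ready condition, GET its node, then move on to the next pod; the interleaved "Waited for ... due to client-side throttling" lines are client-go's default rate limiter (QPS 5, burst 10) pacing those GETs. A minimal client-go sketch of the same check, for reference only — it mirrors the behaviour seen in the log, not minikube's actual implementation; the pod name is copied from the log and the poll interval is an assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True — the same
    // test pod_ready.go:92 logs above as status "Ready":"True".
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to QPS=5/Burst=10; exceeding the burst is what
    	// produces the ~200ms "client-side throttling" waits in the log.
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"kube-proxy-z94m6", metav1.GetOptions{}) // pod name from the log
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // illustrative interval
    	}
    }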
	
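The healthz wait earlier in the log ("Checking apiserver healthz at https://172.23.203.104:8443/healthz" returning "200: ok") is a plain HTTPS GET against the apiserver. A self-contained sketch, assuming anonymous access to /healthz is permitted on this cluster; the real client authenticates with the cluster's credentials rather than skipping TLS verification:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// InsecureSkipVerify keeps the sketch self-contained; use only against
    	// a throwaway test cluster like this one.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://172.23.203.104:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expected: 200 ok
    }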
	
	==> Docker <==
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.273961261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.328931910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.329131410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.329406710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.329772810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 cri-dockerd[1222]: time="2024-06-12T20:38:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3989d97bb5bda163f1208a3d3ee259dc20986f91707fdaec72fbfd6f332c3a6a/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 20:38:29 ha-957600 cri-dockerd[1222]: time="2024-06-12T20:38:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/49b8332111b26e8a790f08afa32dac04688488303f4dcb0d529686fe5ef51560/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.897486456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.897933957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.898047758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.898630360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.929102167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.929248068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.929267668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.929372468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:46:56 ha-957600 dockerd[1320]: time="2024-06-12T20:46:56.559269747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:46:56 ha-957600 dockerd[1320]: time="2024-06-12T20:46:56.559437648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:46:56 ha-957600 dockerd[1320]: time="2024-06-12T20:46:56.559462348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:46:56 ha-957600 dockerd[1320]: time="2024-06-12T20:46:56.560499052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:46:56 ha-957600 cri-dockerd[1222]: time="2024-06-12T20:46:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2705e1162b2dfa56928107ee31e11cffe2a28d10a5ef252a20ac33fd3cd1e2c0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 12 20:46:58 ha-957600 cri-dockerd[1222]: time="2024-06-12T20:46:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 12 20:46:58 ha-957600 dockerd[1320]: time="2024-06-12T20:46:58.730677826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:46:58 ha-957600 dockerd[1320]: time="2024-06-12T20:46:58.731144528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:46:58 ha-957600 dockerd[1320]: time="2024-06-12T20:46:58.731241729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:46:58 ha-957600 dockerd[1320]: time="2024-06-12T20:46:58.731977932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	84e2387ee8a13       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   2705e1162b2df       busybox-fc5497c4f-q7zbt
	ec42c746f91c3       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   49b8332111b26       coredns-7db6d8ff4d-wv2wz
	c8abc35b31bc6       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   3989d97bb5bda       coredns-7db6d8ff4d-fvjdp
	f3fb45713a32c       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   8c96efd997764       storage-provisioner
	6d98838ddf5ec       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              9 minutes ago        Running             kindnet-cni               0                   86884c3e05d62       kindnet-gdk8g
	acce2e5331821       747097150317f                                                                                         9 minutes ago        Running             kube-proxy                0                   935f2939503f5       kube-proxy-z94m6
	12d6ecaecdbef       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   1346e47f9a054       kube-vip-ha-957600
	89d14f1e8c68d       a52dc94f0a912                                                                                         10 minutes ago       Running             kube-scheduler            0                   1773fa0ee02c8       kube-scheduler-ha-957600
	cf6a5b6c15824       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   b2a101629276d       etcd-ha-957600
	488300684bb24       91be940803172                                                                                         10 minutes ago       Running             kube-apiserver            0                   91bc2d6c42e45       kube-apiserver-ha-957600
	f85741f3c269e       25a1387cdab82                                                                                         10 minutes ago       Running             kube-controller-manager   0                   38266d831298c       kube-controller-manager-ha-957600
	
	
	==> coredns [c8abc35b31bc] <==
	[INFO] 10.244.0.4:42986 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.127821489s
	[INFO] 10.244.0.4:33965 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166901s
	[INFO] 10.244.0.4:58509 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000362101s
	[INFO] 10.244.0.4:50272 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142601s
	[INFO] 10.244.0.4:33112 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182601s
	[INFO] 10.244.1.2:47306 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000419002s
	[INFO] 10.244.1.2:59985 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063s
	[INFO] 10.244.1.2:48089 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065801s
	[INFO] 10.244.1.2:42781 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004542721s
	[INFO] 10.244.1.2:60731 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000654s
	[INFO] 10.244.1.2:54446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001089s
	[INFO] 10.244.1.2:58167 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001481s
	[INFO] 10.244.2.2:52082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107901s
	[INFO] 10.244.2.2:55279 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177001s
	[INFO] 10.244.2.2:57294 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070401s
	[INFO] 10.244.0.4:33423 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135201s
	[INFO] 10.244.0.4:41826 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001169s
	[INFO] 10.244.0.4:46427 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001253s
	[INFO] 10.244.0.4:38094 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251701s
	[INFO] 10.244.1.2:55510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123s
	[INFO] 10.244.1.2:37225 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000615s
	[INFO] 10.244.1.2:53395 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059201s
	[INFO] 10.244.2.2:35852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148501s
	[INFO] 10.244.2.2:54338 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000582s
	[INFO] 10.244.2.2:41334 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000662s
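The NXDOMAIN entries for names like kubernetes.default.default.svc.cluster.local are expected, not errors: with the pod resolv.conf rewritten in the Docker section above (search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5), any name with fewer than five dots is tried with each search suffix before being queried as-is. A sketch that reproduces the expansion, assuming it is run inside a pod on this cluster:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Inside a pod, "kubernetes.default" has one dot (< ndots:5), so the
    	// resolver first asks for kubernetes.default.default.svc.cluster.local
    	// (the NXDOMAIN logged above), then kubernetes.default.svc.cluster.local,
    	// which answers with the kubernetes Service's ClusterIP.
    	addrs, err := net.LookupHost("kubernetes.default")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println("kubernetes.default ->", addrs)
    }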
	
	
	==> coredns [ec42c746f91c] <==
	[INFO] 10.244.2.2:54866 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000079801s
	[INFO] 10.244.2.2:41940 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.093995646s
	[INFO] 10.244.0.4:43707 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221101s
	[INFO] 10.244.0.4:43167 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204101s
	[INFO] 10.244.0.4:39327 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012035855s
	[INFO] 10.244.1.2:49888 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000197201s
	[INFO] 10.244.1.2:45990 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215501s
	[INFO] 10.244.1.2:59692 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000338902s
	[INFO] 10.244.2.2:33811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207001s
	[INFO] 10.244.2.2:38677 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.038436877s
	[INFO] 10.244.2.2:48262 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000737s
	[INFO] 10.244.2.2:46710 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183401s
	[INFO] 10.244.2.2:57557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000167801s
	[INFO] 10.244.2.2:43514 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152001s
	[INFO] 10.244.2.2:48911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000233301s
	[INFO] 10.244.2.2:35403 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000651s
	[INFO] 10.244.0.4:33256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196101s
	[INFO] 10.244.0.4:42388 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000254601s
	[INFO] 10.244.0.4:33200 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156101s
	[INFO] 10.244.0.4:57990 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000558s
	[INFO] 10.244.1.2:56220 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084501s
	[INFO] 10.244.1.2:37649 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001667s
	[INFO] 10.244.2.2:59667 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000825s
	[INFO] 10.244.1.2:40342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000300902s
	[INFO] 10.244.2.2:35837 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000256302s
	
	
	==> describe nodes <==
	Name:               ha-957600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=ha-957600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T13_38_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:38:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:47:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:47:06 +0000   Wed, 12 Jun 2024 20:38:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:47:06 +0000   Wed, 12 Jun 2024 20:38:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:47:06 +0000   Wed, 12 Jun 2024 20:38:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:47:06 +0000   Wed, 12 Jun 2024 20:38:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.203.104
	  Hostname:    ha-957600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e64d934c319b441bbd9685ce84fc68cf
	  System UUID:                fdad1bc4-ac9b-c541-b232-922aa0850b6e
	  Boot ID:                    1c97c559-dc70-4810-9425-0df71a26d678
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q7zbt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-7db6d8ff4d-fvjdp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m44s
	  kube-system                 coredns-7db6d8ff4d-wv2wz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m44s
	  kube-system                 etcd-ha-957600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m57s
	  kube-system                 kindnet-gdk8g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m44s
	  kube-system                 kube-apiserver-ha-957600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-controller-manager-ha-957600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-proxy-z94m6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-ha-957600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-vip-ha-957600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m43s  kube-proxy       
	  Normal  Starting                 9m57s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m57s  kubelet          Node ha-957600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m57s  kubelet          Node ha-957600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m57s  kubelet          Node ha-957600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m45s  node-controller  Node ha-957600 event: Registered Node ha-957600 in Controller
	  Normal  NodeReady                9m34s  kubelet          Node ha-957600 status is now: NodeReady
	  Normal  RegisteredNode           5m42s  node-controller  Node ha-957600 event: Registered Node ha-957600 in Controller
	  Normal  RegisteredNode           109s   node-controller  Node ha-957600 event: Registered Node ha-957600 in Controller
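The Allocated resources percentages above are simply requests (or limits) divided by the node's allocatable capacity: 950m of the node's 2000m CPU is 47%, and 290Mi (296960Ki) of 2164264Ki allocatable memory rounds down to 13%. The same arithmetic as a sketch, with the values copied from the table:

    package main

    import "fmt"

    func main() {
    	const (
    		cpuRequestMilli = 950        // 950m CPU requested
    		cpuAllocMilli   = 2000       // 2 CPUs allocatable
    		memRequestKi    = 290 * 1024 // 290Mi requested, in Ki
    		memAllocKi      = 2164264    // allocatable memory in Ki
    	)
    	fmt.Printf("cpu: %d%%\n", cpuRequestMilli*100/cpuAllocMilli) // 47%
    	fmt.Printf("memory: %d%%\n", memRequestKi*100/memAllocKi)    // 13%
    }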
	
	
	Name:               ha-957600-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=ha-957600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T13_42_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:41:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:47:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:47:06 +0000   Wed, 12 Jun 2024 20:41:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:47:06 +0000   Wed, 12 Jun 2024 20:41:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:47:06 +0000   Wed, 12 Jun 2024 20:41:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:47:06 +0000   Wed, 12 Jun 2024 20:42:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.201.185
	  Hostname:    ha-957600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af67b7ed4f1d4a4cab4cfeaf81c8f5c6
	  System UUID:                4178c1bc-b702-0c4d-a862-c03e19bffe95
	  Boot ID:                    36912f38-1473-4e04-b9e9-e6a5f42c71db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qhrx6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-957600-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m59s
	  kube-system                 kindnet-54xjp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m2s
	  kube-system                 kube-apiserver-ha-957600-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-ha-957600-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-j29r7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-ha-957600-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-vip-ha-957600-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m56s                kube-proxy       
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m2s (x8 over 6m3s)  kubelet          Node ha-957600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x8 over 6m3s)  kubelet          Node ha-957600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x7 over 6m3s)  kubelet          Node ha-957600-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m                   node-controller  Node ha-957600-m02 event: Registered Node ha-957600-m02 in Controller
	  Normal  RegisteredNode           5m42s                node-controller  Node ha-957600-m02 event: Registered Node ha-957600-m02 in Controller
	  Normal  RegisteredNode           109s                 node-controller  Node ha-957600-m02 event: Registered Node ha-957600-m02 in Controller
	
	
	Name:               ha-957600-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=ha-957600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T13_45_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:45:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 20:47:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 20:47:23 +0000   Wed, 12 Jun 2024 20:45:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 20:47:23 +0000   Wed, 12 Jun 2024 20:45:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 20:47:23 +0000   Wed, 12 Jun 2024 20:45:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 20:47:23 +0000   Wed, 12 Jun 2024 20:46:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.207.166
	  Hostname:    ha-957600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0e098564aa445b2b5848cea090e8e15
	  System UUID:                09d9a757-7f66-5d4a-a594-4d8a5f785e73
	  Boot ID:                    0d48cb69-b4e8-4aab-974b-a031614083df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sfrgv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-957600-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m7s
	  kube-system                 kindnet-mwpsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m10s
	  kube-system                 kube-apiserver-ha-957600-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-ha-957600-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-9qwpr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-ha-957600-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-vip-ha-957600-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m10s                  node-controller  Node ha-957600-m03 event: Registered Node ha-957600-m03 in Controller
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m11s)  kubelet          Node ha-957600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m11s)  kubelet          Node ha-957600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m11s)  kubelet          Node ha-957600-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-957600-m03 event: Registered Node ha-957600-m03 in Controller
	  Normal  RegisteredNode           109s                   node-controller  Node ha-957600-m03 event: Registered Node ha-957600-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +48.322498] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.166864] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Jun12 20:37] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.105818] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.581524] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.213203] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.242095] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +2.828884] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[  +0.193507] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.220915] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.260611] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[ +11.242976] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.108187] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.444115] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +6.936173] systemd-fstab-generator[1714]: Ignoring "noauto" option for root device
	[  +0.112377] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.878577] kauditd_printk_skb: 67 callbacks suppressed
	[Jun12 20:38] systemd-fstab-generator[2207]: Ignoring "noauto" option for root device
	[ +13.771844] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.634506] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.398361] kauditd_printk_skb: 19 callbacks suppressed
	[Jun12 20:40] hrtimer: interrupt took 2150712 ns
	[Jun12 20:42] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [cf6a5b6c1582] <==
	{"level":"info","ts":"2024-06-12T20:45:54.598005Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"4a5651cf73b784a6"}
	{"level":"info","ts":"2024-06-12T20:45:54.60901Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f62ce7ee06bc2052","remote-peer-id":"4a5651cf73b784a6"}
	{"level":"info","ts":"2024-06-12T20:45:54.616374Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f62ce7ee06bc2052","remote-peer-id":"4a5651cf73b784a6"}
	{"level":"info","ts":"2024-06-12T20:45:54.756794Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f62ce7ee06bc2052","to":"4a5651cf73b784a6","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-12T20:45:54.756981Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"f62ce7ee06bc2052","remote-peer-id":"4a5651cf73b784a6"}
	{"level":"info","ts":"2024-06-12T20:45:54.852235Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f62ce7ee06bc2052","to":"4a5651cf73b784a6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-12T20:45:54.852276Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"f62ce7ee06bc2052","remote-peer-id":"4a5651cf73b784a6"}
	{"level":"warn","ts":"2024-06-12T20:45:55.114972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.330032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:436"}
	{"level":"info","ts":"2024-06-12T20:45:55.115084Z","caller":"traceutil/trace.go:171","msg":"trace[1220649610] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1501; }","duration":"118.507132ms","start":"2024-06-12T20:45:54.996535Z","end":"2024-06-12T20:45:55.115042Z","steps":["trace[1220649610] 'agreement among raft nodes before linearized reading'  (duration: 66.775844ms)","trace[1220649610] 'range keys from in-memory index tree'  (duration: 51.520188ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T20:45:55.116125Z","caller":"traceutil/trace.go:171","msg":"trace[1495983850] transaction","detail":"{read_only:false; response_revision:1502; number_of_response:1; }","duration":"195.773314ms","start":"2024-06-12T20:45:54.920324Z","end":"2024-06-12T20:45:55.116098Z","steps":["trace[1495983850] 'process raft request'  (duration: 137.980703ms)","trace[1495983850] 'compare'  (duration: 57.707611ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:45:55.435029Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"4a5651cf73b784a6","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-12T20:45:56.434145Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"4a5651cf73b784a6","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-06-12T20:45:57.442664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f62ce7ee06bc2052 switched to configuration voters=(5356558758245270694 6961642458556745113 17738808041806766162)"}
	{"level":"info","ts":"2024-06-12T20:45:57.442791Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"9c9eed33717cafba","local-member-id":"f62ce7ee06bc2052"}
	{"level":"info","ts":"2024-06-12T20:45:57.442821Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"f62ce7ee06bc2052","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"4a5651cf73b784a6"}
	{"level":"info","ts":"2024-06-12T20:46:05.268825Z","caller":"traceutil/trace.go:171","msg":"trace[638143896] transaction","detail":"{read_only:false; response_revision:1547; number_of_response:1; }","duration":"142.560218ms","start":"2024-06-12T20:46:05.126246Z","end":"2024-06-12T20:46:05.268806Z","steps":["trace[638143896] 'process raft request'  (duration: 142.228517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T20:46:55.784863Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.930994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-sfrgv\" ","response":"range_response_count:1 size:2951"}
	{"level":"info","ts":"2024-06-12T20:46:55.785239Z","caller":"traceutil/trace.go:171","msg":"trace[1641696766] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-sfrgv; range_end:; response_count:1; response_revision:1725; }","duration":"110.331795ms","start":"2024-06-12T20:46:55.674888Z","end":"2024-06-12T20:46:55.78522Z","steps":["trace[1641696766] 'agreement among raft nodes before linearized reading'  (duration: 85.368506ms)","trace[1641696766] 'range keys from in-memory index tree'  (duration: 24.497887ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:46:55.786027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.060897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-qhrx6\" ","response":"range_response_count:1 size:3273"}
	{"level":"info","ts":"2024-06-12T20:46:55.78667Z","caller":"traceutil/trace.go:171","msg":"trace[1291374510] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-qhrx6; range_end:; response_count:1; response_revision:1725; }","duration":"111.708999ms","start":"2024-06-12T20:46:55.674945Z","end":"2024-06-12T20:46:55.786654Z","steps":["trace[1291374510] 'agreement among raft nodes before linearized reading'  (duration: 85.496205ms)","trace[1291374510] 'range keys from in-memory index tree'  (duration: 25.518592ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T20:46:55.786262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.312698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-q7zbt\" ","response":"range_response_count:1 size:2947"}
	{"level":"info","ts":"2024-06-12T20:46:55.787264Z","caller":"traceutil/trace.go:171","msg":"trace[10547874] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-q7zbt; range_end:; response_count:1; response_revision:1725; }","duration":"112.322701ms","start":"2024-06-12T20:46:55.674928Z","end":"2024-06-12T20:46:55.787251Z","steps":["trace[10547874] 'agreement among raft nodes before linearized reading'  (duration: 85.535406ms)","trace[10547874] 'range keys from in-memory index tree'  (duration: 25.743692ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T20:47:57.758863Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1039}
	{"level":"info","ts":"2024-06-12T20:47:57.875979Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1039,"took":"116.327423ms","hash":3497752263,"current-db-size-bytes":3489792,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2019328,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-06-12T20:47:57.87605Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3497752263,"revision":1039,"compact-revision":-1}
	
	
	==> kernel <==
	 20:48:01 up 12 min,  0 users,  load average: 0.76, 0.46, 0.28
	Linux ha-957600 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6d98838ddf5e] <==
	I0612 20:47:16.130496       1 main.go:250] Node ha-957600-m03 has CIDR [10.244.2.0/24] 
	I0612 20:47:26.144935       1 main.go:223] Handling node with IPs: map[172.23.203.104:{}]
	I0612 20:47:26.144981       1 main.go:227] handling current node
	I0612 20:47:26.144996       1 main.go:223] Handling node with IPs: map[172.23.201.185:{}]
	I0612 20:47:26.145003       1 main.go:250] Node ha-957600-m02 has CIDR [10.244.1.0/24] 
	I0612 20:47:26.145346       1 main.go:223] Handling node with IPs: map[172.23.207.166:{}]
	I0612 20:47:26.145487       1 main.go:250] Node ha-957600-m03 has CIDR [10.244.2.0/24] 
	I0612 20:47:36.157958       1 main.go:223] Handling node with IPs: map[172.23.203.104:{}]
	I0612 20:47:36.158004       1 main.go:227] handling current node
	I0612 20:47:36.158016       1 main.go:223] Handling node with IPs: map[172.23.201.185:{}]
	I0612 20:47:36.158022       1 main.go:250] Node ha-957600-m02 has CIDR [10.244.1.0/24] 
	I0612 20:47:36.158388       1 main.go:223] Handling node with IPs: map[172.23.207.166:{}]
	I0612 20:47:36.158418       1 main.go:250] Node ha-957600-m03 has CIDR [10.244.2.0/24] 
	I0612 20:47:46.175157       1 main.go:223] Handling node with IPs: map[172.23.203.104:{}]
	I0612 20:47:46.176110       1 main.go:227] handling current node
	I0612 20:47:46.176136       1 main.go:223] Handling node with IPs: map[172.23.201.185:{}]
	I0612 20:47:46.176146       1 main.go:250] Node ha-957600-m02 has CIDR [10.244.1.0/24] 
	I0612 20:47:46.176411       1 main.go:223] Handling node with IPs: map[172.23.207.166:{}]
	I0612 20:47:46.176512       1 main.go:250] Node ha-957600-m03 has CIDR [10.244.2.0/24] 
	I0612 20:47:56.191251       1 main.go:223] Handling node with IPs: map[172.23.203.104:{}]
	I0612 20:47:56.191351       1 main.go:227] handling current node
	I0612 20:47:56.191379       1 main.go:223] Handling node with IPs: map[172.23.201.185:{}]
	I0612 20:47:56.191386       1 main.go:250] Node ha-957600-m02 has CIDR [10.244.1.0/24] 
	I0612 20:47:56.191903       1 main.go:223] Handling node with IPs: map[172.23.207.166:{}]
	I0612 20:47:56.191921       1 main.go:250] Node ha-957600-m03 has CIDR [10.244.2.0/24] 
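	
	Note: each kindnet pass above walks the node list every ~10s; it reconciles local rules for the current node and, for each remote node, ensures a route to that node's PodCIDR via the node IP. A rough sketch of that route-programming step, assuming the github.com/vishvananda/netlink package (Linux-only), with the node data hard-coded from the log:
	
	package main
	
	import (
		"log"
		"net"
	
		"github.com/vishvananda/netlink"
	)
	
	func main() {
		// Remote nodes and their pod CIDRs, as reported by kindnet above.
		peers := map[string]string{
			"172.23.201.185": "10.244.1.0/24", // ha-957600-m02
			"172.23.207.166": "10.244.2.0/24", // ha-957600-m03
		}
		for nodeIP, cidr := range peers {
			_, dst, err := net.ParseCIDR(cidr)
			if err != nil {
				log.Fatal(err)
			}
			// RouteReplace is idempotent, so re-running it on every sync
			// period (as the log shows kindnet doing) is safe.
			route := &netlink.Route{Dst: dst, Gw: net.ParseIP(nodeIP)}
			if err := netlink.RouteReplace(route); err != nil {
				log.Printf("failed to program route to %s: %v", cidr, err)
			}
		}
	}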
	
	
	==> kube-apiserver [488300684bb2] <==
	I0612 20:38:04.471406       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 20:38:04.535397       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0612 20:38:04.563217       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 20:38:17.171073       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0612 20:38:17.279149       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0612 20:45:51.827352       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 12.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0612 20:45:51.833936       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0612 20:45:51.833976       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0612 20:45:51.840471       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0612 20:45:51.841089       1 timeout.go:142] post-timeout activity - time-elapsed: 22.609883ms, PATCH "/api/v1/namespaces/default/events/ha-957600-m03.17d85cabd2a3635e" result: <nil>
	E0612 20:47:02.410874       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59693: use of closed network connection
	E0612 20:47:02.927088       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59695: use of closed network connection
	E0612 20:47:04.507868       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59697: use of closed network connection
	E0612 20:47:05.139717       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59699: use of closed network connection
	E0612 20:47:05.594031       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59701: use of closed network connection
	E0612 20:47:06.083903       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59703: use of closed network connection
	E0612 20:47:06.549253       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59705: use of closed network connection
	E0612 20:47:06.995254       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59707: use of closed network connection
	E0612 20:47:07.435015       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59709: use of closed network connection
	E0612 20:47:08.214177       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59712: use of closed network connection
	E0612 20:47:18.674272       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59714: use of closed network connection
	E0612 20:47:19.101737       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59717: use of closed network connection
	E0612 20:47:29.519735       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59719: use of closed network connection
	E0612 20:47:29.946662       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59723: use of closed network connection
	E0612 20:47:40.389190       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59725: use of closed network connection
	
	
	==> kube-controller-manager [f85741f3c269] <==
	I0612 20:38:31.187281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.339017ms"
	I0612 20:38:31.187704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.701µs"
	I0612 20:38:31.726163       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 20:41:59.053923       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-957600-m02\" does not exist"
	I0612 20:41:59.098994       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-957600-m02" podCIDRs=["10.244.1.0/24"]
	I0612 20:42:01.768409       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-957600-m02"
	I0612 20:45:50.977625       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-957600-m03\" does not exist"
	I0612 20:45:51.046695       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-957600-m03" podCIDRs=["10.244.2.0/24"]
	I0612 20:45:51.814004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-957600-m03"
	I0612 20:46:55.513166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.083148ms"
	I0612 20:46:55.560408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.081868ms"
	I0612 20:46:55.942181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="381.551064ms"
	I0612 20:46:56.240327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="296.490861ms"
	I0612 20:46:56.286745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.059065ms"
	I0612 20:46:56.392521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.631078ms"
	I0612 20:46:56.393057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="279.201µs"
	I0612 20:46:56.469856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.53718ms"
	I0612 20:46:56.469962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.9µs"
	I0612 20:46:58.899855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.744337ms"
	I0612 20:46:58.956982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.902564ms"
	I0612 20:46:58.959480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="991.404µs"
	I0612 20:46:59.008748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.439385ms"
	I0612 20:46:59.009419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="301.501µs"
	I0612 20:47:00.009698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.315631ms"
	I0612 20:47:00.010419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="580.303µs"
	
	
	==> kube-proxy [acce2e533182] <==
	I0612 20:38:18.299043       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:38:18.312357       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.203.104"]
	I0612 20:38:18.380210       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:38:18.380639       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:38:18.380772       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:38:18.386825       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:38:18.387128       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:38:18.387274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:38:18.390104       1 config.go:192] "Starting service config controller"
	I0612 20:38:18.390243       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:38:18.390298       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:38:18.390305       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:38:18.390855       1 config.go:319] "Starting node config controller"
	I0612 20:38:18.390952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:38:18.491013       1 shared_informer.go:320] Caches are synced for node config
	I0612 20:38:18.491149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 20:38:18.491178       1 shared_informer.go:320] Caches are synced for service config
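	
	Note: the "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup handshake: start the informer factory, then block until every informer's local cache has completed its initial LIST. A minimal, self-contained sketch of the same pattern (in-cluster config assumed):
	
	package main
	
	import (
		"log"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		log.Println("Waiting for caches to sync for service config")
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			log.Fatal("timed out waiting for caches to sync")
		}
		log.Println("Caches are synced for service config")
	}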
	
	
	==> kube-scheduler [89d14f1e8c68] <==
	W0612 20:38:01.563731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 20:38:01.564341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 20:38:01.574696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 20:38:01.575320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 20:38:01.617060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0612 20:38:01.617197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0612 20:38:01.635954       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0612 20:38:01.638026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0612 20:38:01.664260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 20:38:01.664982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 20:38:01.664535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 20:38:01.665801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 20:38:03.362833       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0612 20:46:55.423799       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qhrx6\": pod busybox-fc5497c4f-qhrx6 is already assigned to node \"ha-957600-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qhrx6" node="ha-957600-m02"
	E0612 20:46:55.423985       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 438e6a8d-a078-4633-a2e8-5a41e507ad81(default/busybox-fc5497c4f-qhrx6) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-qhrx6"
	E0612 20:46:55.424805       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qhrx6\": pod busybox-fc5497c4f-qhrx6 is already assigned to node \"ha-957600-m02\"" pod="default/busybox-fc5497c4f-qhrx6"
	I0612 20:46:55.424983       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qhrx6" node="ha-957600-m02"
	E0612 20:46:55.459641       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sfrgv\": pod busybox-fc5497c4f-sfrgv is already assigned to node \"ha-957600-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-sfrgv" node="ha-957600-m03"
	E0612 20:46:55.462990       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 65f290c0-f814-4897-98f8-5d944ca8ad36(default/busybox-fc5497c4f-sfrgv) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-sfrgv"
	E0612 20:46:55.463122       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sfrgv\": pod busybox-fc5497c4f-sfrgv is already assigned to node \"ha-957600-m03\"" pod="default/busybox-fc5497c4f-sfrgv"
	I0612 20:46:55.463161       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-sfrgv" node="ha-957600-m03"
	E0612 20:46:55.481780       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-q7zbt\": pod busybox-fc5497c4f-q7zbt is already assigned to node \"ha-957600\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-q7zbt" node="ha-957600"
	E0612 20:46:55.481825       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 76d70f3c-f134-446f-8649-2f89690c9ae0(default/busybox-fc5497c4f-q7zbt) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-q7zbt"
	E0612 20:46:55.481842       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-q7zbt\": pod busybox-fc5497c4f-q7zbt is already assigned to node \"ha-957600\"" pod="default/busybox-fc5497c4f-q7zbt"
	I0612 20:46:55.481859       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-q7zbt" node="ha-957600"
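	
	Note: the "Operation cannot be fulfilled on pods/binding ... already assigned" errors above are ordinary optimistic-concurrency conflicts: two schedulers in this HA cluster raced to bind the same pod, the loser's Bind call was rejected, and (as the following log line shows) the scheduler correctly aborts rather than re-queueing. For client code that should instead retry such conflicts, client-go ships retry.RetryOnConflict; a sketch with a hypothetical label update on one of the pods from the log:
	
	package main
	
	import (
		"context"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/util/retry"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// RetryOnConflict re-runs the mutation whenever the server answers
		// with a 409 "Operation cannot be fulfilled ..." conflict.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			pod, err := client.CoreV1().Pods("default").Get(context.Background(),
				"busybox-fc5497c4f-qhrx6", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if pod.Labels == nil {
				pod.Labels = map[string]string{}
			}
			pod.Labels["rescheduled"] = "true" // hypothetical mutation
			_, err = client.CoreV1().Pods("default").Update(context.Background(),
				pod, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			log.Fatalf("update never succeeded: %v", err)
		}
	}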
	
	
	==> kubelet <==
	Jun 12 20:43:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:44:04 ha-957600 kubelet[2214]: E0612 20:44:04.613925    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:44:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:44:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:44:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:44:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:45:04 ha-957600 kubelet[2214]: E0612 20:45:04.614798    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:45:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:45:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:45:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:45:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:46:04 ha-957600 kubelet[2214]: E0612 20:46:04.613361    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:46:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:46:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:46:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:46:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 20:46:55 ha-957600 kubelet[2214]: I0612 20:46:55.466139    2214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wv2wz" podStartSLOduration=518.466093038 podStartE2EDuration="8m38.466093038s" podCreationTimestamp="2024-06-12 20:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 20:38:31.163200581 +0000 UTC m=+26.809492744" watchObservedRunningTime="2024-06-12 20:46:55.466093038 +0000 UTC m=+531.112385201"
	Jun 12 20:46:55 ha-957600 kubelet[2214]: I0612 20:46:55.469961    2214 topology_manager.go:215] "Topology Admit Handler" podUID="76d70f3c-f134-446f-8649-2f89690c9ae0" podNamespace="default" podName="busybox-fc5497c4f-q7zbt"
	Jun 12 20:46:55 ha-957600 kubelet[2214]: I0612 20:46:55.528666    2214 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbk2\" (UniqueName: \"kubernetes.io/projected/76d70f3c-f134-446f-8649-2f89690c9ae0-kube-api-access-ntbk2\") pod \"busybox-fc5497c4f-q7zbt\" (UID: \"76d70f3c-f134-446f-8649-2f89690c9ae0\") " pod="default/busybox-fc5497c4f-q7zbt"
	Jun 12 20:46:56 ha-957600 kubelet[2214]: I0612 20:46:56.789676    2214 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2705e1162b2dfa56928107ee31e11cffe2a28d10a5ef252a20ac33fd3cd1e2c0"
	Jun 12 20:47:04 ha-957600 kubelet[2214]: E0612 20:47:04.624737    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 20:47:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 20:47:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 20:47:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 20:47:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
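	
	Note: the recurring kubelet errors above come from its periodic (here once-a-minute) iptables "canary" probe: it tries to create a throwaway KUBE-KUBELET-CANARY chain in the ip6tables nat table, and exit status 3 means the guest kernel has no ip6table_nat module, which is harmless on this IPv4-only minikube guest. A rough standalone analogue of the probe (ip6tables must be on PATH and the program needs root):
	
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		// Try to create a throwaway chain in the ip6tables nat table, the
		// same check the kubelet canary performs in the log above.
		out, err := exec.Command("ip6tables", "-t", "nat",
			"-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		if err != nil {
			log.Fatalf("canary failed: %v\n%s", err, out)
		}
		// Clean up so repeated runs keep working.
		_ = exec.Command("ip6tables", "-t", "nat",
			"-X", "KUBE-KUBELET-CANARY").Run()
		log.Println("ip6tables nat table is available")
	}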
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0612 13:47:53.042138    8360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-957600 -n ha-957600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-957600 -n ha-957600: (12.6318299s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-957600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.03s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (104.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 node stop m02 -v=7 --alsologtostderr: (35.2511506s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 status -v=7 --alsologtostderr
E0612 14:04:51.901924    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-957600 status -v=7 --alsologtostderr: exit status 1 (34.0725916s)

                                                
                                                
** stderr ** 
	W0612 14:04:22.498924    7096 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0612 14:04:22.505974    7096 out.go:291] Setting OutFile to fd 1504 ...
	I0612 14:04:22.505974    7096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 14:04:22.505974    7096 out.go:304] Setting ErrFile to fd 1560...
	I0612 14:04:22.507028    7096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 14:04:22.523036    7096 out.go:298] Setting JSON to false
	I0612 14:04:22.523149    7096 mustload.go:65] Loading cluster: ha-957600
	I0612 14:04:22.523326    7096 notify.go:220] Checking for updates...
	I0612 14:04:22.524071    7096 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:04:22.524071    7096 status.go:255] checking status of ha-957600 ...
	I0612 14:04:22.525183    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 14:04:24.813321    7096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:04:24.813321    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:24.813321    7096 status.go:330] ha-957600 host status = "Running" (err=<nil>)
	I0612 14:04:24.813480    7096 host.go:66] Checking if "ha-957600" exists ...
	I0612 14:04:24.814158    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 14:04:27.096565    7096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:04:27.096648    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:27.096648    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 14:04:29.823178    7096 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 14:04:29.823178    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:29.823318    7096 host.go:66] Checking if "ha-957600" exists ...
	I0612 14:04:29.836970    7096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 14:04:29.836970    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 14:04:32.085398    7096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:04:32.085398    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:32.085398    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 14:04:34.744327    7096 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 14:04:34.744456    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:34.744618    7096 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 14:04:34.847303    7096 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0103178s)
	I0612 14:04:34.859258    7096 ssh_runner.go:195] Run: systemctl --version
	I0612 14:04:34.882497    7096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 14:04:34.912318    7096 kubeconfig.go:125] found "ha-957600" server: "https://172.23.207.254:8443"
	I0612 14:04:34.912424    7096 api_server.go:166] Checking apiserver status ...
	I0612 14:04:34.926214    7096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 14:04:34.969916    7096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2031/cgroup
	W0612 14:04:34.990059    7096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2031/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 14:04:35.002791    7096 ssh_runner.go:195] Run: ls
	I0612 14:04:35.009819    7096 api_server.go:253] Checking apiserver healthz at https://172.23.207.254:8443/healthz ...
	I0612 14:04:35.017716    7096 api_server.go:279] https://172.23.207.254:8443/healthz returned 200:
	ok
	I0612 14:04:35.017716    7096 status.go:422] ha-957600 apiserver status = Running (err=<nil>)
	I0612 14:04:35.017976    7096 status.go:257] ha-957600 status: &{Name:ha-957600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 14:04:35.017976    7096 status.go:255] checking status of ha-957600-m02 ...
	I0612 14:04:35.018697    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 14:04:37.126861    7096 main.go:141] libmachine: [stdout =====>] : Off
	
	I0612 14:04:37.127077    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:37.127147    7096 status.go:330] ha-957600-m02 host status = "Stopped" (err=<nil>)
	I0612 14:04:37.127147    7096 status.go:343] host is not running, skipping remaining checks
	I0612 14:04:37.127147    7096 status.go:257] ha-957600-m02 status: &{Name:ha-957600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 14:04:37.127147    7096 status.go:255] checking status of ha-957600-m03 ...
	I0612 14:04:37.127782    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 14:04:39.324972    7096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:04:39.324972    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:39.324972    7096 status.go:330] ha-957600-m03 host status = "Running" (err=<nil>)
	I0612 14:04:39.325962    7096 host.go:66] Checking if "ha-957600-m03" exists ...
	I0612 14:04:39.326836    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 14:04:41.521012    7096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:04:41.521012    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:41.521992    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 14:04:44.090832    7096 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 14:04:44.090832    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:44.090832    7096 host.go:66] Checking if "ha-957600-m03" exists ...
	I0612 14:04:44.105253    7096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 14:04:44.105253    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 14:04:46.256099    7096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:04:46.256099    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:46.256217    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 14:04:48.866954    7096 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 14:04:48.866954    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:48.867645    7096 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 14:04:48.960452    7096 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8551839s)
	I0612 14:04:48.973983    7096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 14:04:49.001040    7096 kubeconfig.go:125] found "ha-957600" server: "https://172.23.207.254:8443"
	I0612 14:04:49.001040    7096 api_server.go:166] Checking apiserver status ...
	I0612 14:04:49.013214    7096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 14:04:49.057236    7096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2298/cgroup
	W0612 14:04:49.078146    7096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2298/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 14:04:49.090530    7096 ssh_runner.go:195] Run: ls
	I0612 14:04:49.097875    7096 api_server.go:253] Checking apiserver healthz at https://172.23.207.254:8443/healthz ...
	I0612 14:04:49.105746    7096 api_server.go:279] https://172.23.207.254:8443/healthz returned 200:
	ok
	I0612 14:04:49.105971    7096 status.go:422] ha-957600-m03 apiserver status = Running (err=<nil>)
	I0612 14:04:49.105971    7096 status.go:257] ha-957600-m03 status: &{Name:ha-957600-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 14:04:49.106201    7096 status.go:255] checking status of ha-957600-m04 ...
	I0612 14:04:49.107142    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m04 ).state
	I0612 14:04:51.246423    7096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:04:51.246423    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:51.247459    7096 status.go:330] ha-957600-m04 host status = "Running" (err=<nil>)
	I0612 14:04:51.247459    7096 host.go:66] Checking if "ha-957600-m04" exists ...
	I0612 14:04:51.248652    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m04 ).state
	I0612 14:04:53.484954    7096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:04:53.484954    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:53.485481    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m04 ).networkadapters[0]).ipaddresses[0]
	I0612 14:04:56.049987    7096 main.go:141] libmachine: [stdout =====>] : 172.23.205.43
	
	I0612 14:04:56.049987    7096 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:04:56.049987    7096 host.go:66] Checking if "ha-957600-m04" exists ...
	I0612 14:04:56.061701    7096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 14:04:56.061701    7096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m04 ).state

                                                
                                                
** /stderr **
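
Note: the apiserver health probe in the stderr log above is a plain HTTPS GET of /healthz on the control-plane VIP, expecting a 200 "ok". A minimal standalone re-creation of that check (certificate verification is skipped only because this one-off sketch does not load the minikubeCA bundle; minikube's real check lives in api_server.go):

package main

import (
	"crypto/tls"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert chains to minikubeCA, which this sketch
			// does not load, hence InsecureSkipVerify.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://172.23.207.254:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	log.Printf("https://172.23.207.254:8443/healthz returned %d: %s",
		resp.StatusCode, body)
}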
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-957600 status -v=7 --alsologtostderr" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-957600 -n ha-957600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-957600 -n ha-957600: (12.4200586s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 logs -n 25: (8.8096193s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:59 PDT | 12 Jun 24 13:59 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:59 PDT | 12 Jun 24 13:59 PDT |
	|         | ha-957600-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:59 PDT | 12 Jun 24 13:59 PDT |
	|         | ha-957600:/home/docker/cp-test_ha-957600-m03_ha-957600.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:59 PDT | 12 Jun 24 13:59 PDT |
	|         | ha-957600-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n ha-957600 sudo cat                                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 13:59 PDT | 12 Jun 24 14:00 PDT |
	|         | /home/docker/cp-test_ha-957600-m03_ha-957600.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:00 PDT | 12 Jun 24 14:00 PDT |
	|         | ha-957600-m02:/home/docker/cp-test_ha-957600-m03_ha-957600-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:00 PDT | 12 Jun 24 14:00 PDT |
	|         | ha-957600-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n ha-957600-m02 sudo cat                                                                                   | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:00 PDT | 12 Jun 24 14:00 PDT |
	|         | /home/docker/cp-test_ha-957600-m03_ha-957600-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:00 PDT | 12 Jun 24 14:00 PDT |
	|         | ha-957600-m04:/home/docker/cp-test_ha-957600-m03_ha-957600-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:00 PDT | 12 Jun 24 14:01 PDT |
	|         | ha-957600-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n ha-957600-m04 sudo cat                                                                                   | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:01 PDT | 12 Jun 24 14:01 PDT |
	|         | /home/docker/cp-test_ha-957600-m03_ha-957600-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-957600 cp testdata\cp-test.txt                                                                                         | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:01 PDT | 12 Jun 24 14:01 PDT |
	|         | ha-957600-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:01 PDT | 12 Jun 24 14:01 PDT |
	|         | ha-957600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:01 PDT | 12 Jun 24 14:01 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:01 PDT | 12 Jun 24 14:01 PDT |
	|         | ha-957600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:01 PDT | 12 Jun 24 14:02 PDT |
	|         | ha-957600:/home/docker/cp-test_ha-957600-m04_ha-957600.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:02 PDT | 12 Jun 24 14:02 PDT |
	|         | ha-957600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n ha-957600 sudo cat                                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:02 PDT | 12 Jun 24 14:02 PDT |
	|         | /home/docker/cp-test_ha-957600-m04_ha-957600.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:02 PDT | 12 Jun 24 14:02 PDT |
	|         | ha-957600-m02:/home/docker/cp-test_ha-957600-m04_ha-957600-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:02 PDT | 12 Jun 24 14:03 PDT |
	|         | ha-957600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n ha-957600-m02 sudo cat                                                                                   | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:03 PDT | 12 Jun 24 14:03 PDT |
	|         | /home/docker/cp-test_ha-957600-m04_ha-957600-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt                                                                       | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:03 PDT | 12 Jun 24 14:03 PDT |
	|         | ha-957600-m03:/home/docker/cp-test_ha-957600-m04_ha-957600-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n                                                                                                          | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:03 PDT | 12 Jun 24 14:03 PDT |
	|         | ha-957600-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-957600 ssh -n ha-957600-m03 sudo cat                                                                                   | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:03 PDT | 12 Jun 24 14:03 PDT |
	|         | /home/docker/cp-test_ha-957600-m04_ha-957600-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-957600 node stop m02 -v=7                                                                                              | ha-957600 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:03 PDT | 12 Jun 24 14:04 PDT |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 13:34:56
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 13:34:56.542237    7444 out.go:291] Setting OutFile to fd 1216 ...
	I0612 13:34:56.542237    7444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:34:56.542237    7444 out.go:304] Setting ErrFile to fd 1552...
	I0612 13:34:56.542237    7444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:34:56.569708    7444 out.go:298] Setting JSON to false
	I0612 13:34:56.572530    7444 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22849,"bootTime":1718201647,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 13:34:56.572530    7444 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 13:34:56.579683    7444 out.go:177] * [ha-957600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 13:34:56.584327    7444 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:34:56.584134    7444 notify.go:220] Checking for updates...
	I0612 13:34:56.586832    7444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 13:34:56.589473    7444 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 13:34:56.592013    7444 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 13:34:56.594373    7444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 13:34:56.597436    7444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 13:35:01.833778    7444 out.go:177] * Using the hyperv driver based on user configuration
	I0612 13:35:01.840588    7444 start.go:297] selected driver: hyperv
	I0612 13:35:01.840588    7444 start.go:901] validating driver "hyperv" against <nil>
	I0612 13:35:01.840588    7444 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 13:35:01.888640    7444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 13:35:01.890173    7444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 13:35:01.890173    7444 cni.go:84] Creating CNI manager for ""
	I0612 13:35:01.890173    7444 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0612 13:35:01.890173    7444 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0612 13:35:01.890724    7444 start.go:340] cluster config:
	{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:35:01.890786    7444 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 13:35:01.899687    7444 out.go:177] * Starting "ha-957600" primary control-plane node in "ha-957600" cluster
	I0612 13:35:01.903251    7444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:35:01.903251    7444 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0612 13:35:01.903879    7444 cache.go:56] Caching tarball of preloaded images
	I0612 13:35:01.904060    7444 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 13:35:01.904369    7444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 13:35:01.904485    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:35:01.905231    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json: {Name:mk8a5bf4016ab0a0e27781815d7a6f396d68f116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:35:01.906447    7444 start.go:360] acquireMachinesLock for ha-957600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 13:35:01.906447    7444 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-957600"
	I0612 13:35:01.906800    7444 start.go:93] Provisioning new machine with config: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:35:01.907050    7444 start.go:125] createHost starting for "" (driver="hyperv")
	I0612 13:35:01.913602    7444 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 13:35:01.913602    7444 start.go:159] libmachine.API.Create for "ha-957600" (driver="hyperv")
	I0612 13:35:01.913602    7444 client.go:168] LocalClient.Create starting
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:35:01.914405    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:35:01.914405    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 13:35:03.915690    7444 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 13:35:03.915690    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:03.915690    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 13:35:05.648973    7444 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 13:35:05.648973    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:05.649870    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:35:07.126091    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:35:07.126441    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:07.126513    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:35:10.963077    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:35:10.963077    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:10.965478    7444 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 13:35:11.518935    7444 main.go:141] libmachine: Creating SSH key...
	I0612 13:35:11.923838    7444 main.go:141] libmachine: Creating VM...
	I0612 13:35:11.923838    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:35:14.792720    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:35:14.792720    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:14.793525    7444 main.go:141] libmachine: Using switch "Default Switch"
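
The switch choice above comes straight from the ConvertTo-Json output: the Where-Object filter keeps External switches plus the well-known Default Switch GUID, and the driver settles on the first usable entry. A sketch of decoding that JSON in Go (SwitchType 1 is Hyper-V's Internal type, 2 is External):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 1 == Internal (the Default Switch), 2 == External
    }

    func main() {
        raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
        var switches []vmSwitch
        if err := json.Unmarshal(raw, &switches); err != nil {
            panic(err)
        }
        // Sort-Object -Property SwitchType ordered the candidates; the
        // driver takes the first usable one.
        fmt.Println("Using switch:", switches[0].Name)
    }
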
	I0612 13:35:14.793525    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:35:16.551983    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:35:16.566172    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:16.566172    7444 main.go:141] libmachine: Creating VHD
	I0612 13:35:16.566172    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 13:35:20.323494    7444 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CEE6B17B-45D3-4FF0-9DF7-237DC435A391
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 13:35:20.324319    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:20.324319    7444 main.go:141] libmachine: Writing magic tar header
	I0612 13:35:20.324440    7444 main.go:141] libmachine: Writing SSH key tar header
	I0612 13:35:20.334186    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 13:35:23.482850    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:23.482850    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:23.482850    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\disk.vhd' -SizeBytes 20000MB
	I0612 13:35:26.012868    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:26.013047    7444 main.go:141] libmachine: [stderr =====>] : 
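
The VHD dance above — a 10MB fixed VHD, "Writing magic tar header", "Writing SSH key tar header", then Convert-VHD to dynamic and Resize-VHD to 20000MB — follows the docker-machine convention, as I understand it: a tar stream is written at the start of the raw disk carrying the generated public key, and the boot2docker guest detects it on first boot, extracts the key, and formats the rest of the disk. A rough Go sketch under that assumption (file layout and names are illustrative):

    package main

    import (
        "archive/tar"
        "os"
    )

    // writeKeyTar writes a tar stream at offset 0 of the fixed VHD's data
    // region so the guest can find the SSH key before formatting.
    func writeKeyTar(vhdPath string, pubKey []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        entries := []struct {
            name string
            body []byte
        }{
            {".ssh/", nil},
            {".ssh/key.pub", pubKey},
        }
        for _, e := range entries {
            hdr := &tar.Header{Name: e.name, Mode: 0644, Size: int64(len(e.body))}
            if e.body == nil {
                hdr.Typeflag = tar.TypeDir
                hdr.Mode = 0755
            }
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            if _, err := tw.Write(e.body); err != nil {
                return err
            }
        }
        return tw.Close()
    }

    func main() {
        // Placeholder path; errors ignored only because this is a sketch.
        _ = writeKeyTar("disk-placeholder.vhd", []byte("ssh-rsa AAAA... example"))
    }
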
	I0612 13:35:26.013103    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-957600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0612 13:35:29.684966    7444 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-957600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 13:35:29.684966    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:29.684966    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-957600 -DynamicMemoryEnabled $false
	I0612 13:35:31.928694    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:31.928812    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:31.928812    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-957600 -Count 2
	I0612 13:35:34.099615    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:34.099707    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:34.099707    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-957600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\boot2docker.iso'
	I0612 13:35:36.612780    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:36.612780    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:36.613122    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-957600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\disk.vhd'
	I0612 13:35:39.346262    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:39.347301    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:39.347301    7444 main.go:141] libmachine: Starting VM...
	I0612 13:35:39.347301    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-957600
	I0612 13:35:42.540047    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:42.540047    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:42.540047    7444 main.go:141] libmachine: Waiting for host to start...
	I0612 13:35:42.540047    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:35:44.802918    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:35:44.803633    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:44.803704    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:35:47.339590    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:47.339590    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:48.341681    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:35:50.539461    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:35:50.539556    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:50.539615    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:35:53.112618    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:53.112690    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:54.119513    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:35:56.317936    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:35:56.317936    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:56.318114    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:35:58.795148    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:35:58.795541    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:35:59.808387    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:02.035483    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:02.035548    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:02.035609    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:04.563510    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:36:04.564633    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:05.578819    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:07.822978    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:07.822978    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:07.823386    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:10.466760    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:10.467001    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:10.467280    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:12.606935    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:12.607534    7444 main.go:141] libmachine: [stderr =====>] : 
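
The "Waiting for host to start..." section is a plain polling loop: query the VM state, then the first NIC's first IP address, and retry until Hyper-V reports one (about 25 seconds here). Sketched in Go, with getState/getIP standing in for the PowerShell calls:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForIP polls until the VM is Running and has an address,
    // sleeping between rounds as the log's ~1s gaps between cycles show.
    func waitForIP(getState, getIP func() string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if getState() == "Running" {
                if ip := getIP(); ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for an IP")
    }

    func main() {
        ip, err := waitForIP(
            func() string { return "Running" },
            func() string { return "172.23.203.104" },
            time.Minute,
        )
        fmt.Println(ip, err)
    }
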
	I0612 13:36:12.607534    7444 machine.go:94] provisionDockerMachine start ...
	I0612 13:36:12.607534    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:14.774693    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:14.774693    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:14.775762    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:17.374045    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:17.374199    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:17.380004    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:17.391204    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:17.391204    7444 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 13:36:17.531450    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
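
"Using SSH client type: native" means a Go SSH client rather than shelling out to ssh.exe; the &{...} line above dumps its connection parameters (address, port, key). A minimal equivalent with golang.org/x/crypto/ssh (host-key checking disabled purely to keep the sketch short; the key path is a placeholder):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\placeholder\machines\ha-957600\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
        }
        client, err := ssh.Dial("tcp", "172.23.203.104:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("SSH cmd err, output: %v: %s", err, out)
    }
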
	
	I0612 13:36:17.531450    7444 buildroot.go:166] provisioning hostname "ha-957600"
	I0612 13:36:17.531450    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:19.652524    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:19.652524    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:19.652918    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:22.170758    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:22.170758    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:22.176149    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:22.176965    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:22.176965    7444 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957600 && echo "ha-957600" | sudo tee /etc/hostname
	I0612 13:36:22.346621    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957600
	
	I0612 13:36:22.346755    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:24.510894    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:24.511261    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:24.511401    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:27.039816    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:27.039816    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:27.046542    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:27.047339    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:27.047339    7444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 13:36:27.203007    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 13:36:27.203007    7444 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 13:36:27.203007    7444 buildroot.go:174] setting up certificates
	I0612 13:36:27.203007    7444 provision.go:84] configureAuth start
	I0612 13:36:27.203007    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:29.368374    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:29.368374    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:29.368793    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:31.899036    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:31.899240    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:31.899357    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:33.997322    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:33.997322    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:33.998178    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:36.497718    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:36.497718    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:36.497946    7444 provision.go:143] copyHostCerts
	I0612 13:36:36.498071    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 13:36:36.498530    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 13:36:36.498632    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 13:36:36.499151    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 13:36:36.500446    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 13:36:36.500621    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 13:36:36.500621    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 13:36:36.500621    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 13:36:36.502242    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 13:36:36.502457    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 13:36:36.502457    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 13:36:36.502457    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 13:36:36.503742    7444 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-957600 san=[127.0.0.1 172.23.203.104 ha-957600 localhost minikube]
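
The san=[...] list above becomes the server certificate's subject alternative names, so the Docker TLS endpoint is valid for the VM IP, loopback, and the host names. A compressed crypto/x509 sketch (self-signed for brevity, whereas the real server.pem is signed by the minikube CA; the 26280h lifetime mirrors CertExpiration in the config dump):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-957600"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-957600", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.23.203.104")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
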
	I0612 13:36:36.625953    7444 provision.go:177] copyRemoteCerts
	I0612 13:36:36.635736    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 13:36:36.636735    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:38.750317    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:38.750689    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:38.750689    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:41.242544    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:41.242658    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:41.242658    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:36:41.355617    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7188672s)
	I0612 13:36:41.355617    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 13:36:41.355617    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 13:36:41.399652    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 13:36:41.400097    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 13:36:41.447183    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 13:36:41.447883    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0612 13:36:41.497281    7444 provision.go:87] duration metric: took 14.2941595s to configureAuth
	I0612 13:36:41.497281    7444 buildroot.go:189] setting minikube options for container-runtime
	I0612 13:36:41.497862    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:36:41.497902    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:43.582317    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:43.582563    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:43.582652    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:46.106834    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:46.107642    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:46.112950    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:46.113545    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:46.113545    7444 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 13:36:46.256968    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 13:36:46.257071    7444 buildroot.go:70] root file system type: tmpfs
	I0612 13:36:46.257243    7444 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 13:36:46.257243    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:48.369564    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:48.369564    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:48.369564    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:50.882639    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:50.882705    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:50.887251    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:50.888223    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:50.888223    7444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 13:36:51.047753    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 13:36:51.047845    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:53.133977    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:53.134324    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:53.134324    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:36:55.681306    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:36:55.681306    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:55.687497    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:36:55.687497    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:36:55.688042    7444 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 13:36:57.808076    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
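
The diff/mv one-liner above makes unit installation idempotent: docker.service is only replaced (and docker reloaded, enabled, and restarted) when the freshly rendered unit differs from what is on disk. Here diff fails because no unit existed yet, so the whole branch runs. The same check sketched in Go (paths as in the log; actually running this needs root):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func installIfChanged(newPath, livePath string) error {
        newUnit, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        oldUnit, _ := os.ReadFile(livePath) // a missing unit reads as nil, so it differs
        if bytes.Equal(newUnit, oldUnit) {
            return os.Remove(newPath) // unchanged: discard the staged copy
        }
        if err := os.Rename(newPath, livePath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = installIfChanged("/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service")
    }
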
	
	I0612 13:36:57.808137    7444 machine.go:97] duration metric: took 45.2004637s to provisionDockerMachine
	I0612 13:36:57.808137    7444 client.go:171] duration metric: took 1m55.8941812s to LocalClient.Create
	I0612 13:36:57.808193    7444 start.go:167] duration metric: took 1m55.894238s to libmachine.API.Create "ha-957600"
	I0612 13:36:57.808321    7444 start.go:293] postStartSetup for "ha-957600" (driver="hyperv")
	I0612 13:36:57.808321    7444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 13:36:57.819780    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 13:36:57.820889    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:36:59.935272    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:36:59.935854    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:36:59.935920    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:02.430030    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:02.431098    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:02.431098    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:37:02.544154    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7243588s)
	I0612 13:37:02.556323    7444 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 13:37:02.563987    7444 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 13:37:02.564142    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 13:37:02.564634    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 13:37:02.565586    7444 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 13:37:02.565586    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 13:37:02.577058    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 13:37:02.595360    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 13:37:02.641985    7444 start.go:296] duration metric: took 4.8336491s for postStartSetup
	I0612 13:37:02.645019    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:04.784175    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:04.784546    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:04.784897    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:07.259969    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:07.259969    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:07.259969    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:37:07.263496    7444 start.go:128] duration metric: took 2m5.3560633s to createHost
	I0612 13:37:07.263589    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:09.413529    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:09.413639    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:09.413639    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:11.937910    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:11.937910    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:11.944884    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:37:11.944884    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:37:11.944884    7444 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 13:37:12.088661    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224632.091842522
	
	I0612 13:37:12.088661    7444 fix.go:216] guest clock: 1718224632.091842522
	I0612 13:37:12.088661    7444 fix.go:229] Guest: 2024-06-12 13:37:12.091842522 -0700 PDT Remote: 2024-06-12 13:37:07.2635896 -0700 PDT m=+130.806402601 (delta=4.828252922s)
	I0612 13:37:12.088661    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:14.220293    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:14.220293    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:14.220293    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:16.729661    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:16.729661    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:16.735559    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:37:16.735559    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.203.104 22 <nil> <nil>}
	I0612 13:37:16.735559    7444 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718224632
	I0612 13:37:16.877631    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 20:37:12 UTC 2024
	
	I0612 13:37:16.877631    7444 fix.go:236] clock set: Wed Jun 12 20:37:12 UTC 2024
	 (err=<nil>)
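
The clock fix-up reads the guest's `date +%s.%N`, compares it with host time (a 4.8s drift here, mostly VM boot latency), and forces the guest clock to the host's current epoch with `sudo date -s`. Sketched in Go, with runOnGuest standing in for the SSH runner and the 2s threshold an assumed knob:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func syncClock(runOnGuest func(string) string, threshold time.Duration) {
        out := strings.TrimSpace(runOnGuest("date +%s.%N"))
        secs, _ := strconv.ParseFloat(out, 64)
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(time.Now())
        fmt.Printf("guest clock: %s (delta=%s)\n", out, delta)
        if delta < -threshold || delta > threshold {
            // Reset the guest to the host's epoch at the moment of issue,
            // matching the `sudo date -s @...` command in the log.
            runOnGuest(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
    }

    func main() {
        syncClock(func(cmd string) string { return "1718224632.091842522" }, 2*time.Second)
    }
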
	I0612 13:37:16.877631    7444 start.go:83] releasing machines lock for "ha-957600", held for 2m14.9704791s
	I0612 13:37:16.877631    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:19.036041    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:19.036575    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:19.036575    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:21.534178    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:21.534178    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:21.539333    7444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 13:37:21.539431    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:21.551466    7444 ssh_runner.go:195] Run: cat /version.json
	I0612 13:37:21.551632    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:37:23.743902    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:23.744088    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:23.744176    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:23.752775    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:37:23.752775    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:23.752775    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:37:26.360477    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:26.360477    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:26.360748    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:37:26.396519    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:37:26.396519    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:37:26.397271    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:37:26.515102    7444 ssh_runner.go:235] Completed: cat /version.json: (4.9624684s)
	I0612 13:37:26.515102    7444 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9757171s)
	I0612 13:37:26.530566    7444 ssh_runner.go:195] Run: systemctl --version
	I0612 13:37:26.552035    7444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0612 13:37:26.560153    7444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 13:37:26.572971    7444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 13:37:26.604819    7444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 13:37:26.604917    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:37:26.604917    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:37:26.658600    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 13:37:26.696089    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 13:37:26.718226    7444 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 13:37:26.733936    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 13:37:26.771089    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:37:26.802851    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 13:37:26.834244    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:37:26.870154    7444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 13:37:26.906061    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 13:37:26.940386    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 13:37:26.973956    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 13:37:27.009706    7444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 13:37:27.047191    7444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 13:37:27.081976    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:27.293221    7444 ssh_runner.go:195] Run: sudo systemctl restart containerd
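
The run of sed edits above rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc v2 shim, pause:3.9 as the sandbox image, /etc/cni/net.d as the CNI conf dir, and unprivileged ports enabled. One of those edits expressed with Go's regexp, as a sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true`
        // Same substitution as the sed above, preserving indentation.
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }
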
	I0612 13:37:27.325466    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:37:27.339386    7444 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 13:37:27.379251    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:37:27.415507    7444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 13:37:27.461514    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:37:27.500264    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:37:27.538787    7444 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 13:37:27.610224    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:37:27.636612    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:37:27.688637    7444 ssh_runner.go:195] Run: which cri-dockerd
	I0612 13:37:27.707187    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 13:37:27.727346    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 13:37:27.771607    7444 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 13:37:27.991901    7444 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 13:37:28.192210    7444 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 13:37:28.192516    7444 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 13:37:28.236500    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:28.443355    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:37:30.980177    7444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5368145s)
	I0612 13:37:30.992534    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 13:37:31.029135    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:37:31.062643    7444 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 13:37:31.261297    7444 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 13:37:31.477180    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:31.672377    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 13:37:31.714713    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:37:31.751314    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:31.933908    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 13:37:32.042389    7444 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 13:37:32.059089    7444 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 13:37:32.068335    7444 start.go:562] Will wait 60s for crictl version
	I0612 13:37:32.080328    7444 ssh_runner.go:195] Run: which crictl
	I0612 13:37:32.106785    7444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 13:37:32.163497    7444 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
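	The two 60-second waits announced above, first for the /var/run/cri-dockerd.sock socket path and then for crictl version, are plain polling loops. A rough Go equivalent of the socket wait, as a hypothetical helper rather than the actual minikube implementation:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists as a unix socket or the timeout
	// elapses, mirroring the 60s wait the log announces for cri-dockerd.sock.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}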
	I0612 13:37:32.173814    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:37:32.217995    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:37:32.253189    7444 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 13:37:32.253360    7444 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 13:37:32.258119    7444 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 13:37:32.258119    7444 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 13:37:32.258119    7444 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 13:37:32.258119    7444 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 13:37:32.262304    7444 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 13:37:32.262350    7444 ip.go:210] interface addr: 172.23.192.1/20
	I0612 13:37:32.275181    7444 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 13:37:32.282339    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 13:37:32.319049    7444 kubeadm.go:877] updating cluster {Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 13:37:32.319049    7444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:37:32.331943    7444 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 13:37:32.353638    7444 docker.go:685] Got preloaded images: 
	I0612 13:37:32.353638    7444 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0612 13:37:32.367429    7444 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0612 13:37:32.397888    7444 ssh_runner.go:195] Run: which lz4
	I0612 13:37:32.404497    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0612 13:37:32.417918    7444 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 13:37:32.423770    7444 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 13:37:32.424763    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0612 13:37:34.287633    7444 docker.go:649] duration metric: took 1.882155s to copy over tarball
	I0612 13:37:34.299774    7444 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 13:37:42.818860    7444 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5188951s)
	I0612 13:37:42.818860    7444 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 13:37:42.888444    7444 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0612 13:37:42.909759    7444 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0612 13:37:42.959732    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:43.189785    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:37:46.150583    7444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9607884s)
	I0612 13:37:46.160124    7444 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 13:37:46.191664    7444 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0612 13:37:46.191740    7444 cache_images.go:84] Images are preloaded, skipping loading
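	The preload decision at docker.go:691 and cache_images.go:84 reduces to a membership test: list the runtime's images and look for the expected kube-apiserver tag; only when it is absent does minikube scp and extract the ~360 MB preloaded.tar.lz4, as happened above. A sketch of that check (imagesPreloaded is a hypothetical name):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagesPreloaded reports whether the expected apiserver image is already in
	// the Docker image store, the same test the log applies before deciding to
	// copy and extract preloaded.tar.lz4.
	func imagesPreloaded(want string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if img == want {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := imagesPreloaded("registry.k8s.io/kube-apiserver:v1.30.1")
		fmt.Println(ok, err)
	}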
	I0612 13:37:46.191740    7444 kubeadm.go:928] updating node { 172.23.203.104 8443 v1.30.1 docker true true} ...
	I0612 13:37:46.192064    7444 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.203.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 13:37:46.202510    7444 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0612 13:37:46.239519    7444 cni.go:84] Creating CNI manager for ""
	I0612 13:37:46.239613    7444 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 13:37:46.239613    7444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 13:37:46.239725    7444 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.203.104 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-957600 NodeName:ha-957600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.203.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.203.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 13:37:46.239992    7444 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.203.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-957600"
	  kubeletExtraArgs:
	    node-ip: 172.23.203.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.203.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
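	The three kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration) plus the KubeProxyConfiguration are rendered from the kubeadm options logged at kubeadm.go:181 and written to /var/tmp/minikube/kubeadm.yaml.new. A much-simplified text/template sketch of that rendering; the real template in minikube covers far more fields, and the names here are illustrative only:

	package main

	import (
		"os"
		"text/template"
	)

	// opts carries the handful of values this sketch substitutes; the real
	// kubeadm template takes many more.
	type opts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "{{.NodeName}}"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		_ = t.Execute(os.Stdout, opts{
			AdvertiseAddress: "172.23.203.104",
			BindPort:         8443,
			NodeName:         "ha-957600",
			PodSubnet:        "10.244.0.0/16",
		})
	}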
	
	I0612 13:37:46.240081    7444 kube-vip.go:115] generating kube-vip config ...
	I0612 13:37:46.252078    7444 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 13:37:46.278209    7444 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 13:37:46.278209    7444 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
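	This generated static pod is what gives the HA cluster its stable endpoint: kube-vip claims the virtual IP 172.23.207.254 on eth0, uses the plndr-cp-lock lease for leader election (5s lease, 3s renew deadline, 1s retry), and, with cp_enable and lb_enable set, load-balances API traffic on port 8443 across control planes. A stripped-down sketch of deriving the env section from the two inputs that actually vary (names illustrative; minikube's real generator is the kube-vip.go code referenced above):

	package main

	import "fmt"

	// vipEnv returns the kube-vip env pairs shown in the manifest, derived
	// from the VIP address and API server port. Illustrative only.
	func vipEnv(vip string, port int) map[string]string {
		return map[string]string{
			"vip_arp":   "true",
			"port":      fmt.Sprint(port),
			"address":   vip,
			"cp_enable": "true",
			"lb_enable": "true",
			"lb_port":   fmt.Sprint(port),
		}
	}

	func main() {
		for k, v := range vipEnv("172.23.207.254", 8443) {
			fmt.Printf("%s=%s\n", k, v)
		}
	}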
	I0612 13:37:46.289375    7444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 13:37:46.314560    7444 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 13:37:46.330080    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0612 13:37:46.352034    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0612 13:37:46.383560    7444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 13:37:46.416500    7444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0612 13:37:46.448054    7444 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0612 13:37:46.490410    7444 ssh_runner.go:195] Run: grep 172.23.207.254	control-plane.minikube.internal$ /etc/hosts
	I0612 13:37:46.495364    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
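	Both host-record edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent grep/echo/cp pipeline: drop any stale line for the name, append the fresh mapping, and copy the result back over /etc/hosts. The same effect in pure Go, as a hypothetical local sketch:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry drops any stale line for name from the hosts file and
	// appends "ip\tname", matching the grep/echo/cp pipeline in the log.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry, re-added below
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "172.23.192.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}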
	I0612 13:37:46.528393    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:37:46.729154    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:37:46.759505    7444 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600 for IP: 172.23.203.104
	I0612 13:37:46.759550    7444 certs.go:194] generating shared ca certs ...
	I0612 13:37:46.759633    7444 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:46.760428    7444 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 13:37:46.760913    7444 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 13:37:46.761207    7444 certs.go:256] generating profile certs ...
	I0612 13:37:46.761932    7444 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key
	I0612 13:37:46.761932    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.crt with IP's: []
	I0612 13:37:47.362697    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.crt ...
	I0612 13:37:47.362697    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.crt: {Name:mkd4d63a91baf2e65e053f36cc6b43511c7c6e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.364600    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key ...
	I0612 13:37:47.364600    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key: {Name:mkeefa0efc3694c7552816886ab96188c0feac77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.368020    7444 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.2cdbb0d2
	I0612 13:37:47.368020    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.2cdbb0d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.203.104 172.23.207.254]
	I0612 13:37:47.614086    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.2cdbb0d2 ...
	I0612 13:37:47.614086    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.2cdbb0d2: {Name:mk0a37a7a02e561559da747eb9992ef106e73eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.615338    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.2cdbb0d2 ...
	I0612 13:37:47.616386    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.2cdbb0d2: {Name:mk6cb6877f38d518fea7ca584fab3b00ed6037ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.616657    7444 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.2cdbb0d2 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt
	I0612 13:37:47.628757    7444 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.2cdbb0d2 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key
	I0612 13:37:47.629772    7444 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key
	I0612 13:37:47.630955    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt with IP's: []
	I0612 13:37:47.738742    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt ...
	I0612 13:37:47.738742    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt: {Name:mk8571c0058e2ae080ac64e930a9dddcf6a91373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.739749    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key ...
	I0612 13:37:47.739749    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key: {Name:mk57b524a6182d5adbbee38d20828d8cb4c5c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:37:47.740740    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 13:37:47.741416    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 13:37:47.741617    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 13:37:47.741825    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 13:37:47.741958    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 13:37:47.742105    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 13:37:47.742255    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 13:37:47.751846    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 13:37:47.753841    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 13:37:47.753841    7444 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 13:37:47.753841    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 13:37:47.754853    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 13:37:47.754853    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 13:37:47.754853    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 13:37:47.755838    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 13:37:47.755838    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 13:37:47.755838    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:37:47.755838    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 13:37:47.756859    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 13:37:47.802817    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 13:37:47.842833    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 13:37:47.893372    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 13:37:47.942414    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 13:37:47.991186    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 13:37:48.041399    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 13:37:48.091411    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 13:37:48.142182    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 13:37:48.189745    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 13:37:48.249101    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 13:37:48.294060    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 13:37:48.334672    7444 ssh_runner.go:195] Run: openssl version
	I0612 13:37:48.353387    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 13:37:48.385542    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 13:37:48.394014    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 13:37:48.406311    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 13:37:48.430041    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 13:37:48.472481    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 13:37:48.505747    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 13:37:48.512366    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 13:37:48.522348    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 13:37:48.542948    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 13:37:48.578690    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 13:37:48.608485    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:37:48.616009    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:37:48.628250    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:37:48.652127    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
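	The openssl x509 -hash calls above compute the subject-hash filenames (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's CApath-style lookup expects under /etc/ssl/certs, and the test -L guards make the ln -fs links idempotent. Equivalent logic in Go, shelling out to openssl the same way (a hypothetical sketch; needs root to write the link):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert creates the /etc/ssl/certs/<subject-hash>.0 symlink for a CA
	// certificate, as the log does with openssl + ln -fs.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // mimic ln -fs: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}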
	I0612 13:37:48.686731    7444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:37:48.693974    7444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 13:37:48.694491    7444 kubeadm.go:391] StartCluster: {Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:37:48.702819    7444 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 13:37:48.743519    7444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 13:37:48.774371    7444 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 13:37:48.805671    7444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 13:37:48.823488    7444 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 13:37:48.823488    7444 kubeadm.go:156] found existing configuration files:
	
	I0612 13:37:48.837851    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 13:37:48.854010    7444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 13:37:48.865309    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 13:37:48.894657    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 13:37:48.911259    7444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 13:37:48.925520    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 13:37:48.956374    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 13:37:48.973918    7444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 13:37:48.985443    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 13:37:49.014074    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 13:37:49.031290    7444 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 13:37:49.042373    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 13:37:49.062944    7444 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 13:37:49.500936    7444 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 13:38:05.056892    7444 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 13:38:05.057019    7444 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 13:38:05.057386    7444 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 13:38:05.057688    7444 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 13:38:05.057902    7444 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0612 13:38:05.057902    7444 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 13:38:05.060687    7444 out.go:204]   - Generating certificates and keys ...
	I0612 13:38:05.061111    7444 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 13:38:05.061209    7444 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 13:38:05.061793    7444 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 13:38:05.062069    7444 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-957600 localhost] and IPs [172.23.203.104 127.0.0.1 ::1]
	I0612 13:38:05.062155    7444 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 13:38:05.062244    7444 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-957600 localhost] and IPs [172.23.203.104 127.0.0.1 ::1]
	I0612 13:38:05.062244    7444 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 13:38:05.062786    7444 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 13:38:05.062976    7444 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 13:38:05.063057    7444 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 13:38:05.063221    7444 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 13:38:05.063381    7444 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 13:38:05.063479    7444 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 13:38:05.063703    7444 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 13:38:05.063864    7444 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 13:38:05.063895    7444 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 13:38:05.063895    7444 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 13:38:05.066920    7444 out.go:204]   - Booting up control plane ...
	I0612 13:38:05.066920    7444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 13:38:05.067444    7444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 13:38:05.067444    7444 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 13:38:05.067736    7444 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 13:38:05.068265    7444 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 13:38:05.068317    7444 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 13:38:05.068674    7444 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 13:38:05.068674    7444 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 13:38:05.068674    7444 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002410575s
	I0612 13:38:05.069235    7444 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 13:38:05.069235    7444 kubeadm.go:309] [api-check] The API server is healthy after 8.92993329s
	I0612 13:38:05.069235    7444 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 13:38:05.069235    7444 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 13:38:05.069826    7444 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 13:38:05.071806    7444 kubeadm.go:309] [mark-control-plane] Marking the node ha-957600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 13:38:05.071806    7444 kubeadm.go:309] [bootstrap-token] Using token: 6td0sr.fr4ba9t8fayocxit
	I0612 13:38:05.075385    7444 out.go:204]   - Configuring RBAC rules ...
	I0612 13:38:05.075548    7444 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 13:38:05.075548    7444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 13:38:05.075548    7444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 13:38:05.075548    7444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 13:38:05.076380    7444 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 13:38:05.076380    7444 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 13:38:05.076380    7444 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 13:38:05.076380    7444 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 13:38:05.076380    7444 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 13:38:05.076380    7444 kubeadm.go:309] 
	I0612 13:38:05.077384    7444 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 13:38:05.077384    7444 kubeadm.go:309] 
	I0612 13:38:05.077384    7444 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 13:38:05.077384    7444 kubeadm.go:309] 
	I0612 13:38:05.077384    7444 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 13:38:05.077384    7444 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 13:38:05.077384    7444 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 13:38:05.077384    7444 kubeadm.go:309] 
	I0612 13:38:05.077384    7444 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 13:38:05.077384    7444 kubeadm.go:309] 
	I0612 13:38:05.078378    7444 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 13:38:05.078378    7444 kubeadm.go:309] 
	I0612 13:38:05.078378    7444 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 13:38:05.078378    7444 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 13:38:05.078378    7444 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 13:38:05.078378    7444 kubeadm.go:309] 
	I0612 13:38:05.078378    7444 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 13:38:05.078378    7444 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 13:38:05.078378    7444 kubeadm.go:309] 
	I0612 13:38:05.079382    7444 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6td0sr.fr4ba9t8fayocxit \
	I0612 13:38:05.079382    7444 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 \
	I0612 13:38:05.079382    7444 kubeadm.go:309] 	--control-plane 
	I0612 13:38:05.079382    7444 kubeadm.go:309] 
	I0612 13:38:05.079382    7444 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 13:38:05.079382    7444 kubeadm.go:309] 
	I0612 13:38:05.079382    7444 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6td0sr.fr4ba9t8fayocxit \
	I0612 13:38:05.080381    7444 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 
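	The join commands echoed by kubeadm embed the bootstrap token 6td0sr.fr4ba9t8fayocxit, which per the InitConfiguration above expires after 24h. If a node needs to join after that TTL, standard kubeadm usage (not a minikube-specific step) is to mint a fresh token on an existing control plane:

	kubeadm token create --print-join-command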
	I0612 13:38:05.080381    7444 cni.go:84] Creating CNI manager for ""
	I0612 13:38:05.080381    7444 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 13:38:05.082083    7444 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0612 13:38:05.098184    7444 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0612 13:38:05.107200    7444 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0612 13:38:05.107200    7444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0612 13:38:05.154645    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0612 13:38:05.734387    7444 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 13:38:05.749475    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:05.753032    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957600 minikube.k8s.io/updated_at=2024_06_12T13_38_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=ha-957600 minikube.k8s.io/primary=true
	I0612 13:38:05.770548    7444 ops.go:34] apiserver oom_adj: -16
	I0612 13:38:05.995827    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:06.498890    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:07.001767    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:07.502706    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:08.007651    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:08.512858    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:09.000782    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:09.501782    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:10.002749    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:10.504591    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:11.008799    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:11.496997    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:12.009568    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:12.499765    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:13.002627    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:13.505702    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:14.007806    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:14.508788    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:14.998380    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:15.499001    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:16.001721    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:16.505607    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 13:38:16.646029    7444 kubeadm.go:1107] duration metric: took 10.9115159s to wait for elevateKubeSystemPrivileges
	W0612 13:38:16.646188    7444 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 13:38:16.646188    7444 kubeadm.go:393] duration metric: took 27.9516118s to StartCluster
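	The burst of kubectl get sa default calls above, at roughly 500 ms intervals, is the elevateKubeSystemPrivileges step waiting for the token controller to create the default service account in the new cluster before privilege setup is considered complete. The pattern reduces to a simple retry loop; a hypothetical sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries "kubectl get sa default" until it succeeds or
	// the deadline passes, the same loop visible in the log.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}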
	I0612 13:38:16.646244    7444 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:38:16.646464    7444 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:38:16.648111    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:38:16.649635    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 13:38:16.649820    7444 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:38:16.649865    7444 start.go:240] waiting for startup goroutines ...
	I0612 13:38:16.649972    7444 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 13:38:16.650135    7444 addons.go:69] Setting storage-provisioner=true in profile "ha-957600"
	I0612 13:38:16.650212    7444 addons.go:234] Setting addon storage-provisioner=true in "ha-957600"
	I0612 13:38:16.650212    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:38:16.650335    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:38:16.650212    7444 addons.go:69] Setting default-storageclass=true in profile "ha-957600"
	I0612 13:38:16.650464    7444 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-957600"
	I0612 13:38:16.651492    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:16.651492    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:16.852840    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 13:38:17.202169    7444 start.go:946] {"host.minikube.internal": 172.23.192.1} host record injected into CoreDNS's ConfigMap
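	The sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block ahead of the forward plugin (and a log directive after errors) so that in-cluster lookups of host.minikube.internal resolve to the Windows host at 172.23.192.1. Reconstructed from the sed expression, the injected Corefile fragment is:

	        hosts {
	           172.23.192.1 host.minikube.internal
	           fallthrough
	        }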
	I0612 13:38:18.926279    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:18.926279    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:18.926279    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:18.926279    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:18.930896    7444 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 13:38:18.928577    7444 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:38:18.934510    7444 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 13:38:18.934510    7444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 13:38:18.934510    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:18.935318    7444 kapi.go:59] client config for ha-957600: &rest.Config{Host:"https://172.23.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 13:38:18.936044    7444 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 13:38:18.936946    7444 addons.go:234] Setting addon default-storageclass=true in "ha-957600"
	I0612 13:38:18.936946    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:38:18.938136    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:21.265178    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:21.265178    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:21.265178    7444 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 13:38:21.265178    7444 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 13:38:21.265178    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:38:21.421446    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:21.421446    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:21.421446    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:38:23.530843    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:38:23.530843    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:23.531044    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:38:24.101883    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:38:24.101883    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:24.102214    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:38:24.262017    7444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 13:38:26.190338    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:38:26.190338    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:26.191152    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:38:26.327756    7444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
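Every `ssh_runner.go:195] Run:` line above is a command executed over the SSH session that sshutil opens with the per-machine id_rsa key and the `docker` user. A self-contained sketch of establishing that session and running the storageclass apply, using golang.org/x/crypto/ssh (the library choice is an assumption; address, user, key path, and command are taken from the log):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and address as reported by sshutil above.
        keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the driver trusts the VM it just created
        }
        client, err := ssh.Dial("tcp", "172.23.203.104:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }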
	I0612 13:38:26.498949    7444 round_trippers.go:463] GET https://172.23.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0612 13:38:26.499013    7444 round_trippers.go:469] Request Headers:
	I0612 13:38:26.499013    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:38:26.499013    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:38:26.511158    7444 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0612 13:38:26.513156    7444 round_trippers.go:463] PUT https://172.23.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0612 13:38:26.513249    7444 round_trippers.go:469] Request Headers:
	I0612 13:38:26.513249    7444 round_trippers.go:473]     Content-Type: application/json
	I0612 13:38:26.513249    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:38:26.513249    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:38:26.520324    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:38:26.524055    7444 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0612 13:38:26.526381    7444 addons.go:510] duration metric: took 9.8764058s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0612 13:38:26.526448    7444 start.go:245] waiting for cluster config update ...
	I0612 13:38:26.526448    7444 start.go:254] writing updated cluster config ...
	I0612 13:38:26.531505    7444 out.go:177] 
	I0612 13:38:26.540073    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:38:26.540073    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:38:26.545767    7444 out.go:177] * Starting "ha-957600-m02" control-plane node in "ha-957600" cluster
	I0612 13:38:26.548438    7444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:38:26.548438    7444 cache.go:56] Caching tarball of preloaded images
	I0612 13:38:26.549107    7444 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 13:38:26.549435    7444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 13:38:26.549435    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:38:26.553331    7444 start.go:360] acquireMachinesLock for ha-957600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 13:38:26.554255    7444 start.go:364] duration metric: took 923.8µs to acquireMachinesLock for "ha-957600-m02"
	I0612 13:38:26.554313    7444 start.go:93] Provisioning new machine with config: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
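The config dump above is Go's %+v rendering of the cluster config; the Nodes list is what drives this second create. A trimmed sketch of just the node fields visible in the dump (struct shape inferred from the printed output, not copied from minikube's sources):

    package main

    import "fmt"

    // Node mirrors the fields visible in the dump above (names as printed;
    // the real minikube type has more fields).
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ContainerRuntime  string
        ControlPlane      bool
        Worker            bool
    }

    func main() {
        nodes := []Node{
            {Name: "", IP: "172.23.203.104", Port: 8443, KubernetesVersion: "v1.30.1", ContainerRuntime: "docker", ControlPlane: true, Worker: true},
            {Name: "m02", IP: "", Port: 8443, KubernetesVersion: "v1.30.1", ContainerRuntime: "docker", ControlPlane: true, Worker: true},
        }
        // m02 has no IP yet: createHost below provisions the VM first, and the
        // address is filled in once the adapter reports one.
        fmt.Printf("%+v\n", nodes)
    }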
	I0612 13:38:26.554313    7444 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0612 13:38:26.556402    7444 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 13:38:26.556402    7444 start.go:159] libmachine.API.Create for "ha-957600" (driver="hyperv")
	I0612 13:38:26.557318    7444 client.go:168] LocalClient.Create starting
	I0612 13:38:26.557429    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 13:38:26.557941    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:38:26.557941    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:38:26.557941    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 13:38:26.558658    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:38:26.558658    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:38:26.558658    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 13:38:28.534318    7444 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 13:38:28.534318    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:28.534318    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 13:38:30.293730    7444 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 13:38:30.293730    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:30.294018    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:38:31.769159    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:38:31.769159    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:31.769442    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:38:35.444465    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:38:35.444465    7444 main.go:141] libmachine: [stderr =====>] : 
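The switch query above keeps every External vSwitch plus the built-in Default Switch, matched by its fixed ID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444. A sketch of parsing that JSON and choosing a switch, preferring External and falling back to the Default Switch (the selection order is an assumption; field values are from the output above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 1 = Internal (the Default Switch above), 2 = External
    }

    // pickSwitch prefers an External switch and falls back to the first result,
    // approximating the filter in the PowerShell pipeline above.
    func pickSwitch(raw []byte) (vmSwitch, error) {
        var switches []vmSwitch
        if err := json.Unmarshal(raw, &switches); err != nil {
            return vmSwitch{}, err
        }
        if len(switches) == 0 {
            return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch found")
        }
        for _, s := range switches {
            if s.SwitchType == 2 { // External
                return s, nil
            }
        }
        return switches[0], nil
    }

    func main() {
        raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
        s, err := pickSwitch(raw)
        if err != nil {
            panic(err)
        }
        fmt.Printf("Using switch %q\n", s.Name) // Using switch "Default Switch"
    }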
	I0612 13:38:35.447599    7444 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 13:38:35.935904    7444 main.go:141] libmachine: Creating SSH key...
	I0612 13:38:36.302100    7444 main.go:141] libmachine: Creating VM...
	I0612 13:38:36.302678    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:38:39.249913    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:38:39.251043    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:39.251178    7444 main.go:141] libmachine: Using switch "Default Switch"
	I0612 13:38:39.251178    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:38:41.006758    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:38:41.006758    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:41.006758    7444 main.go:141] libmachine: Creating VHD
	I0612 13:38:41.007373    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 13:38:44.914666    7444 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 214CD393-3C4C-4E43-B696-A1BFA3CB3E3D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 13:38:44.914666    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:44.914666    7444 main.go:141] libmachine: Writing magic tar header
	I0612 13:38:44.914926    7444 main.go:141] libmachine: Writing SSH key tar header
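The two "tar header" lines refer to the boot2docker provisioning convention used by machine drivers: a raw tar stream is written at the front of the fixed-size VHD containing a `.magic` marker ("boot2docker, please format-me") and the freshly generated SSH public key; on first boot the guest spots the marker, formats the disk, and installs the key. A minimal sketch under that assumption (file layout per the convention, not minikube's exact code):

    package main

    import (
        "archive/tar"
        "os"
    )

    const magic = "boot2docker, please format-me"

    // writeFormatMeImage writes a tar stream at offset 0 of the (fixed) disk
    // image: a `.magic` marker followed by the SSH public key.
    func writeFormatMeImage(path string, pubKey []byte) error {
        f, err := os.OpenFile(path, os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        entries := []struct {
            name string
            body []byte
        }{
            {".magic", []byte(magic)},
            {".ssh/authorized_keys", pubKey},
        }
        for _, e := range entries {
            if err := tw.WriteHeader(&tar.Header{Name: e.name, Size: int64(len(e.body)), Mode: 0644}); err != nil {
                return err
            }
            if _, err := tw.Write(e.body); err != nil {
                return err
            }
        }
        return tw.Close()
    }

    func main() {
        key, err := os.ReadFile("id_rsa.pub") // hypothetical key path
        if err != nil {
            panic(err)
        }
        if err := writeFormatMeImage("fixed.vhd", key); err != nil {
            panic(err)
        }
    }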
	I0612 13:38:44.926522    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 13:38:48.115717    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:38:48.115717    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:48.115717    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\disk.vhd' -SizeBytes 20000MB
	I0612 13:38:50.668958    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:38:50.669053    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:50.669149    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-957600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0612 13:38:54.339010    7444 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-957600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 13:38:54.339328    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:54.339328    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-957600-m02 -DynamicMemoryEnabled $false
	I0612 13:38:56.612252    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:38:56.612252    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:56.613264    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-957600-m02 -Count 2
	I0612 13:38:58.785257    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:38:58.785759    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:38:58.785759    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-957600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\boot2docker.iso'
	I0612 13:39:01.349335    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:01.349582    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:01.349692    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-957600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\disk.vhd'
	I0612 13:39:03.977118    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:03.978132    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:03.978132    7444 main.go:141] libmachine: Starting VM...
	I0612 13:39:03.978242    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-957600-m02
	I0612 13:39:07.027954    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:07.027954    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:07.027954    7444 main.go:141] libmachine: Waiting for host to start...
	I0612 13:39:07.028148    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:09.374184    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:09.374940    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:09.375255    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:11.950459    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:11.951440    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:12.957673    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:15.244330    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:15.244330    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:15.244330    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:17.850575    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:17.850685    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:18.857228    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:21.102729    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:21.102787    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:21.102787    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:23.670281    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:23.670281    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:24.683065    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:26.956345    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:26.956555    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:26.956555    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:29.589038    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:39:29.589038    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:30.602632    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:32.822026    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:32.822026    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:32.822779    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:35.421893    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:35.421893    7444 main.go:141] libmachine: [stderr =====>] : 
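Everything between Start-VM and this point is a poll loop: the driver alternates the VM-state and first-adapter-IP queries, sleeping briefly between rounds, until Hyper-V finally reports 172.23.201.185. A sketch of that wait (PowerShell query string as in the log; the timeout value is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForIP polls the VM's first network adapter until Hyper-V reports an
    // address, mirroring the repeated queries in the log above.
    func waitForIP(vmName string, timeout time.Duration) (string, error) {
        query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
            if err == nil {
                if ip := strings.TrimSpace(string(out)); ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second) // matches the ~1s pauses between attempts above
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
    }

    func main() {
        ip, err := waitForIP("ha-957600-m02", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("VM reachable at", ip)
    }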
	I0612 13:39:35.422932    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:37.559103    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:37.559103    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:37.559480    7444 machine.go:94] provisionDockerMachine start ...
	I0612 13:39:37.559480    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:39.701029    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:39.701029    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:39.701851    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:42.269798    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:42.269798    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:42.275072    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:39:42.286579    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:39:42.286579    7444 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 13:39:42.424161    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 13:39:42.424161    7444 buildroot.go:166] provisioning hostname "ha-957600-m02"
	I0612 13:39:42.424933    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:44.554001    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:44.554001    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:44.554001    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:47.135454    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:47.135454    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:47.141610    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:39:47.142322    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:39:47.142322    7444 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957600-m02 && echo "ha-957600-m02" | sudo tee /etc/hostname
	I0612 13:39:47.305875    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957600-m02
	
	I0612 13:39:47.305987    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:49.433522    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:49.433923    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:49.434068    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:51.964941    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:51.964941    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:51.971469    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:39:51.971997    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:39:51.971997    7444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 13:39:52.119423    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 13:39:52.119423    7444 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 13:39:52.119423    7444 buildroot.go:174] setting up certificates
	I0612 13:39:52.119423    7444 provision.go:84] configureAuth start
	I0612 13:39:52.119423    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:54.239993    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:54.240673    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:54.240673    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:39:56.832040    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:39:56.832237    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:56.832333    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:39:59.007402    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:39:59.007402    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:39:59.007654    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:01.551689    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:01.554891    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:01.554891    7444 provision.go:143] copyHostCerts
	I0612 13:40:01.555234    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 13:40:01.555663    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 13:40:01.555743    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 13:40:01.556240    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 13:40:01.557448    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 13:40:01.557733    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 13:40:01.557819    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 13:40:01.558166    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 13:40:01.559021    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 13:40:01.559021    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 13:40:01.559021    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 13:40:01.559862    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 13:40:01.560793    7444 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-957600-m02 san=[127.0.0.1 172.23.201.185 ha-957600-m02 localhost minikube]
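configureAuth issues a fresh server certificate signed by the local CA with exactly the SAN set printed above. A self-contained sketch of that issuance with crypto/x509 (validity period, key size, and a PKCS#1 RSA CA key are assumptions; the SANs and org are from the log line):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA paths are illustrative; the run uses ca.pem/ca-key.pem from .minikube\certs.
        caCertPEM, err := os.ReadFile("ca.pem")
        if err != nil {
            panic(err)
        }
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        if err != nil {
            panic(err)
        }
        caBlock, _ := pem.Decode(caCertPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            panic(err)
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key
        if err != nil {
            panic(err)
        }

        // SAN set and org exactly as printed in the provision.go line above.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-957600-m02"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-957600-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.23.201.185")},
        }
        priv, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
        if err != nil {
            panic(err)
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }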
	I0612 13:40:01.636530    7444 provision.go:177] copyRemoteCerts
	I0612 13:40:01.649537    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 13:40:01.649537    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:03.794049    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:03.794049    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:03.794412    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:06.326325    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:06.326325    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:06.326891    7444 sshutil.go:53] new ssh client: &{IP:172.23.201.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa Username:docker}
	I0612 13:40:06.439069    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7891683s)
	I0612 13:40:06.439130    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 13:40:06.439689    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 13:40:06.484150    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 13:40:06.484590    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 13:40:06.529103    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 13:40:06.529566    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 13:40:06.577929    7444 provision.go:87] duration metric: took 14.4584633s to configureAuth
	I0612 13:40:06.577993    7444 buildroot.go:189] setting minikube options for container-runtime
	I0612 13:40:06.578556    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:40:06.578680    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:08.718496    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:08.718496    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:08.718677    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:11.328555    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:11.328555    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:11.334940    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:11.335759    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:11.335759    7444 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 13:40:11.478737    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 13:40:11.478768    7444 buildroot.go:70] root file system type: tmpfs
	I0612 13:40:11.478960    7444 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 13:40:11.478960    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:13.660235    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:13.660235    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:13.660235    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:16.213094    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:16.213094    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:16.220956    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:16.221130    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:16.221130    7444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.203.104"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 13:40:16.387145    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.203.104
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 13:40:16.387254    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:18.572173    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:18.572449    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:18.572449    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:21.162854    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:21.162854    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:21.169369    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:21.169369    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:21.169369    7444 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 13:40:23.352425    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0612 13:40:23.352425    7444 machine.go:97] duration metric: took 45.792807s to provisionDockerMachine
	I0612 13:40:23.352425    7444 client.go:171] duration metric: took 1m56.7947565s to LocalClient.Create
	I0612 13:40:23.352425    7444 start.go:167] duration metric: took 1m56.7956727s to libmachine.API.Create "ha-957600"
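The `sudo diff -u ... || { sudo mv ...; }` command a few lines up is the idempotent unit-install idiom: the new docker.service is only swapped in, followed by daemon-reload/enable/restart, when it differs from (or, as here, when there is no) installed copy. A small helper that assembles the same one-liner (string construction only; executing it is the ssh_runner's job):

    package main

    import "fmt"

    // updateUnitCmd returns the diff-then-swap shell pipeline used above:
    // a no-op when the installed unit already matches the new content.
    func updateUnitCmd(unit string) string {
        path := "/lib/systemd/system/" + unit
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            path, unit)
    }

    func main() {
        fmt.Println(updateUnitCmd("docker.service"))
    }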
	I0612 13:40:23.352425    7444 start.go:293] postStartSetup for "ha-957600-m02" (driver="hyperv")
	I0612 13:40:23.352425    7444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 13:40:23.365043    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 13:40:23.365043    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:25.540820    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:25.540820    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:25.540897    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:28.136093    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:28.136093    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:28.137010    7444 sshutil.go:53] new ssh client: &{IP:172.23.201.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa Username:docker}
	I0612 13:40:28.254816    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8897579s)
	I0612 13:40:28.268285    7444 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 13:40:28.275759    7444 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 13:40:28.275759    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 13:40:28.276328    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 13:40:28.277200    7444 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 13:40:28.277200    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 13:40:28.290126    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 13:40:28.309175    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 13:40:28.360898    7444 start.go:296] duration metric: took 5.008458s for postStartSetup
	I0612 13:40:28.364121    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:30.542866    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:30.543140    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:30.543140    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:33.177427    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:33.178226    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:33.178550    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:40:33.181088    7444 start.go:128] duration metric: took 2m6.6263947s to createHost
	I0612 13:40:33.181088    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:35.375898    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:35.375898    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:35.375898    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:37.910727    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:37.911265    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:37.916437    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:37.917299    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:37.917299    7444 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 13:40:38.063816    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718224838.060224079
	
	I0612 13:40:38.063879    7444 fix.go:216] guest clock: 1718224838.060224079
	I0612 13:40:38.063941    7444 fix.go:229] Guest: 2024-06-12 13:40:38.060224079 -0700 PDT Remote: 2024-06-12 13:40:33.1810882 -0700 PDT m=+336.723281701 (delta=4.879135879s)
	I0612 13:40:38.063941    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:40.186558    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:40.186558    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:40.187600    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:42.766690    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:42.766690    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:42.773814    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:40:42.773896    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.201.185 22 <nil> <nil>}
	I0612 13:40:42.773896    7444 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718224838
	I0612 13:40:42.929297    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 20:40:38 UTC 2024
	
	I0612 13:40:42.929459    7444 fix.go:236] clock set: Wed Jun 12 20:40:38 UTC 2024
	 (err=<nil>)
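The fix.go lines above compare the guest clock (`date +%s.%N` over SSH) against the host clock and, since the ~4.88s delta is out of tolerance, push the host time into the guest with `sudo date -s @<unix-seconds>`. The arithmetic in miniature (the tolerance constant is an assumption; minikube's actual cutoff is not shown in this log):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Guest clock as returned by `date +%s.%N` in the log above.
        guestRaw := "1718224838.060224079"
        sec, err := strconv.ParseFloat(guestRaw, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(sec*float64(time.Second)))

        remote := time.Now() // host-side timestamp taken when the command ran
        delta := guest.Sub(remote)
        fmt.Printf("guest=%s delta=%s\n", guest, delta)

        const tolerance = 2 * time.Second // assumed threshold
        if delta > tolerance || delta < -tolerance {
            // What the driver then runs over SSH:
            fmt.Printf("sudo date -s @%d\n", guest.Unix())
        }
    }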
	I0612 13:40:42.929459    7444 start.go:83] releasing machines lock for "ha-957600-m02", held for 2m16.3747372s
	I0612 13:40:42.929708    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:45.128236    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:45.128236    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:45.128878    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:47.648541    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:47.648576    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:47.651841    7444 out.go:177] * Found network options:
	I0612 13:40:47.654656    7444 out.go:177]   - NO_PROXY=172.23.203.104
	W0612 13:40:47.656873    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 13:40:47.659003    7444 out.go:177]   - NO_PROXY=172.23.203.104
	W0612 13:40:47.663194    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:40:47.665264    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 13:40:47.667814    7444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 13:40:47.667814    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:47.677154    7444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 13:40:47.677154    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m02 ).state
	I0612 13:40:49.867090    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:49.867383    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:49.867481    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:49.902430    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:40:49.902985    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:49.902985    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 13:40:52.483377    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:52.483377    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:52.483480    7444 sshutil.go:53] new ssh client: &{IP:172.23.201.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa Username:docker}
	I0612 13:40:52.537949    7444 main.go:141] libmachine: [stdout =====>] : 172.23.201.185
	
	I0612 13:40:52.537995    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:40:52.537995    7444 sshutil.go:53] new ssh client: &{IP:172.23.201.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m02\id_rsa Username:docker}
	I0612 13:40:52.578098    7444 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9009296s)
	W0612 13:40:52.578098    7444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 13:40:52.591561    7444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 13:40:52.665688    7444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 13:40:52.665688    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:40:52.665688    7444 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9978595s)
	I0612 13:40:52.665688    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:40:52.714276    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 13:40:52.748860    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 13:40:52.770275    7444 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 13:40:52.782268    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 13:40:52.816382    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:40:52.849636    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 13:40:52.882633    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:40:52.915694    7444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 13:40:52.947948    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 13:40:52.980354    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 13:40:53.011933    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 13:40:53.051174    7444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 13:40:53.083502    7444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 13:40:53.114005    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:40:53.313109    7444 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0612 13:40:53.345887    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:40:53.361415    7444 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 13:40:53.402850    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:40:53.437336    7444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 13:40:53.475234    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:40:53.509504    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:40:53.544910    7444 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 13:40:53.602806    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:40:53.626302    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:40:53.669913    7444 ssh_runner.go:195] Run: which cri-dockerd
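Both /etc/crictl.yaml writes in this section (first for containerd, then just above for cri-dockerd) use the same idiom: regenerate the file so crictl always talks to the currently selected CRI endpoint; the `which cri-dockerd` probe then confirms the shim binary is present. A sketch of the docker-side result and a quick check, assuming cri-dockerd is already up:

    # Sketch: point crictl at cri-dockerd and verify (assumes cri-dockerd is up).
    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
      | sudo tee /etc/crictl.yaml
    sudo crictl version    # should report RuntimeName: docker, as logged below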
	I0612 13:40:53.689214    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 13:40:53.708977    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 13:40:53.752588    7444 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 13:40:53.944076    7444 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 13:40:54.129393    7444 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 13:40:54.129535    7444 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 13:40:54.179641    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:40:54.378985    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:40:56.909081    7444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5300882s)
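The 130-byte /etc/docker/daemon.json pushed above is what keeps dockerd on the same "cgroupfs" cgroup driver the kubelet will use; the log records only the transfer size, not the payload. A plausible sketch of such a file (contents are an assumption, not taken from the log):

    # Sketch: a daemon.json selecting the cgroupfs driver. The exact bytes
    # minikube writes are not shown in the log, so this payload is assumed.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker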
	I0612 13:40:56.921229    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 13:40:56.958759    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:40:56.993741    7444 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 13:40:57.191638    7444 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 13:40:57.406959    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:40:57.620110    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 13:40:57.663638    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:40:57.699652    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:40:57.911354    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
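Note the ordering above: cri-docker.socket is unmasked and enabled before the service is restarted, so systemd socket activation owns /var/run/cri-dockerd.sock by the time kubelet needs it. Condensed sketch of the same bring-up:

    # Sketch: the cri-dockerd bring-up sequence above, condensed.
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket cri-docker.service
    stat /var/run/cri-dockerd.sock    # the socket the next step waits up to 60s for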
	I0612 13:40:58.022124    7444 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 13:40:58.037792    7444 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 13:40:58.048542    7444 start.go:562] Will wait 60s for crictl version
	I0612 13:40:58.064979    7444 ssh_runner.go:195] Run: which crictl
	I0612 13:40:58.086892    7444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 13:40:58.142076    7444 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 13:40:58.151630    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:40:58.193697    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:40:58.228083    7444 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 13:40:58.232348    7444 out.go:177]   - env NO_PROXY=172.23.203.104
	I0612 13:40:58.235567    7444 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 13:40:58.240807    7444 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 13:40:58.240807    7444 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 13:40:58.240807    7444 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 13:40:58.240807    7444 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 13:40:58.244245    7444 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 13:40:58.244245    7444 ip.go:210] interface addr: 172.23.192.1/20
	I0612 13:40:58.257640    7444 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 13:40:58.264589    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
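The one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh mapping, and cp the temp file back (cp rather than mv, so the file is rewritten in place even if /etc/hosts is a bind mount). The same idiom with placeholder values:

    # Sketch: idempotent /etc/hosts update, same idiom as the run above.
    # NAME and IP are placeholders for illustration.
    NAME=host.minikube.internal; IP=172.23.192.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts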
	I0612 13:40:58.288178    7444 mustload.go:65] Loading cluster: ha-957600
	I0612 13:40:58.288998    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:40:58.289701    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:41:00.427841    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:41:00.428195    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:00.428195    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:41:00.428991    7444 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600 for IP: 172.23.201.185
	I0612 13:41:00.428991    7444 certs.go:194] generating shared ca certs ...
	I0612 13:41:00.429066    7444 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:41:00.429712    7444 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 13:41:00.430235    7444 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 13:41:00.430530    7444 certs.go:256] generating profile certs ...
	I0612 13:41:00.431501    7444 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key
	I0612 13:41:00.431617    7444 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.7389266d
	I0612 13:41:00.431936    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.7389266d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.203.104 172.23.201.185 172.23.207.254]
	I0612 13:41:00.616300    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.7389266d ...
	I0612 13:41:00.617316    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.7389266d: {Name:mk5aa24280130c6f7302d45d6a80b585d49ec1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:41:00.618835    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.7389266d ...
	I0612 13:41:00.618835    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.7389266d: {Name:mke68ba6ace12f6e280ee6403c498da322ea43b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:41:00.619255    7444 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.7389266d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt
	I0612 13:41:00.634246    7444 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.7389266d -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key
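The apiserver serving certificate is regenerated here because its SAN list must now cover the in-cluster service IP (10.96.0.1), localhost, both control-plane node IPs (172.23.203.104 and 172.23.201.185), and the kube-vip VIP (172.23.207.254), exactly the IP set logged above. A quick on-node check of the SANs, as a sketch:

    # Sketch: confirm the regenerated apiserver cert covers all expected IPs.
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'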
	I0612 13:41:00.635317    7444 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key
	I0612 13:41:00.635317    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 13:41:00.635837    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 13:41:00.636035    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 13:41:00.636338    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 13:41:00.636338    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 13:41:00.636745    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 13:41:00.636946    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 13:41:00.637304    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 13:41:00.637562    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 13:41:00.638159    7444 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 13:41:00.638211    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 13:41:00.638559    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 13:41:00.639272    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 13:41:00.639640    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 13:41:00.639904    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 13:41:00.640287    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:41:00.640540    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 13:41:00.640676    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 13:41:00.640880    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:41:02.782252    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:41:02.782252    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:02.782991    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:41:05.335972    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:41:05.335972    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:05.336973    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:41:05.441663    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0612 13:41:05.449087    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0612 13:41:05.481855    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0612 13:41:05.488656    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0612 13:41:05.518524    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0612 13:41:05.525519    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0612 13:41:05.559147    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0612 13:41:05.565001    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0612 13:41:05.594613    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0612 13:41:05.606327    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0612 13:41:05.646666    7444 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0612 13:41:05.653742    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0612 13:41:05.670716    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 13:41:05.717982    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 13:41:05.761853    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 13:41:05.808882    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 13:41:05.856653    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0612 13:41:05.908010    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 13:41:05.971406    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 13:41:06.018165    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 13:41:06.064126    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 13:41:06.110977    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 13:41:06.157727    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 13:41:06.203026    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0612 13:41:06.235173    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0612 13:41:06.265600    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0612 13:41:06.296598    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0612 13:41:06.327103    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0612 13:41:06.356794    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0612 13:41:06.386571    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0612 13:41:06.430610    7444 ssh_runner.go:195] Run: openssl version
	I0612 13:41:06.453328    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 13:41:06.486040    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:41:06.492410    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:41:06.503892    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:41:06.524664    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 13:41:06.557700    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 13:41:06.587045    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 13:41:06.594262    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 13:41:06.605040    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 13:41:06.625597    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 13:41:06.655894    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 13:41:06.688148    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 13:41:06.695244    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 13:41:06.707600    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 13:41:06.729264    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
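Each CA above is linked into /etc/ssl/certs under its subject-hash name (the `openssl x509 -hash` runs), which is what lets OpenSSL's default lookup path find it. The idiom for one certificate, as a sketch:

    # Sketch: install a CA into the OpenSSL hash directory, the idiom above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"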
	I0612 13:41:06.758532    7444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:41:06.765926    7444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 13:41:06.766013    7444 kubeadm.go:928] updating node {m02 172.23.201.185 8443 v1.30.1 docker true true} ...
	I0612 13:41:06.766013    7444 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.201.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 13:41:06.766013    7444 kube-vip.go:115] generating kube-vip config ...
	I0612 13:41:06.777313    7444 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 13:41:06.803300    7444 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 13:41:06.803501    7444 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
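The generated manifest above runs kube-vip as a static pod on each control-plane node: leader election via the plndr-cp-lock lease, ARP advertisement of the VIP 172.23.207.254 on eth0, and (lb_enable) load-balancing of apiserver traffic on port 8443; the modprobe of the ip_vs modules just before it is what that load balancer needs. A sketch of how the result could be verified on the node once kubelet picks up the manifest:

    # Sketch: verify kube-vip after kubelet loads the static-pod manifest.
    ip addr show eth0 | grep 172.23.207.254         # VIP bound on the current leader
    curl -k https://172.23.207.254:8443/healthz     # apiserver reachable via the VIP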
	I0612 13:41:06.814458    7444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 13:41:06.831503    7444 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 13:41:06.842557    7444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 13:41:06.862515    7444 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0612 13:41:06.862689    7444 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0612 13:41:06.862689    7444 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
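The `?checksum=file:...sha256` suffix on each URL above tells the downloader to verify the binary against the published sha256 before caching it. A manual equivalent for one binary, as a sketch:

    # Sketch: checksum-verified download, the manual equivalent of the above.
    V=v1.30.1; B=kubelet
    curl -fLO "https://dl.k8s.io/release/$V/bin/linux/amd64/$B"
    echo "$(curl -fsL https://dl.k8s.io/release/$V/bin/linux/amd64/$B.sha256)  $B" \
      | sha256sum -c -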
	I0612 13:41:08.046660    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 13:41:08.060659    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 13:41:08.068688    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 13:41:08.068897    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 13:41:12.816200    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 13:41:12.827223    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 13:41:12.834953    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 13:41:12.835061    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 13:41:15.978485    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:41:16.005729    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 13:41:16.019682    7444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 13:41:16.025789    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 13:41:16.025789    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0612 13:41:16.649889    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0612 13:41:16.894813    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0612 13:41:16.928910    7444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 13:41:16.962118    7444 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 13:41:17.007494    7444 ssh_runner.go:195] Run: grep 172.23.207.254	control-plane.minikube.internal$ /etc/hosts
	I0612 13:41:17.013839    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 13:41:17.050446    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:41:17.253198    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:41:17.288024    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:41:17.288621    7444 start.go:316] joinCluster: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:41:17.288621    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 13:41:17.289217    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:41:19.430528    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:41:19.430771    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:19.430771    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:41:22.055986    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:41:22.055986    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:41:22.056641    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:41:22.353070    7444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0643514s)
	I0612 13:41:22.353190    7444 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:41:22.353268    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vawtol.6y8lqv0tes4381yl --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-957600-m02 --control-plane --apiserver-advertise-address=172.23.201.185 --apiserver-bind-port=8443"
	I0612 13:42:03.415394    7444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vawtol.6y8lqv0tes4381yl --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-957600-m02 --control-plane --apiserver-advertise-address=172.23.201.185 --apiserver-bind-port=8443": (41.0619126s)
	I0612 13:42:03.415394    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
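The join above is the standard two-step control-plane join: mint a join command on the existing control plane with `kubeadm token create --print-join-command --ttl=0`, then run it on m02 with --control-plane and the node's own advertise address. minikube passes --ignore-preflight-errors=all because it manages the prerequisites itself, and it can omit kubeadm's --certificate-key upload because the shared certificates were already scp'd onto the node earlier in this log. The flow, as a sketch with placeholder credentials:

    # Sketch: the two-step control-plane join used above
    # (<token>/<hash> stand in for the values the first command prints).
    kubeadm token create --print-join-command --ttl=0    # on the existing control plane
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=172.23.201.185 \
      --apiserver-bind-port=8443 --cri-socket unix:///var/run/cri-dockerd.sock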
	I0612 13:42:04.209571    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957600-m02 minikube.k8s.io/updated_at=2024_06_12T13_42_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=ha-957600 minikube.k8s.io/primary=false
	I0612 13:42:04.382411    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-957600-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0612 13:42:04.571472    7444 start.go:318] duration metric: took 47.2827092s to joinCluster
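After the join, the two kubectl runs above label the node with minikube metadata and remove the control-plane NoSchedule taint, since in this topology every control-plane node is also a worker (ControlPlane:true Worker:true). The same tweaks as plain kubectl, for reference:

    # Sketch: the post-join node tweaks above, via kubectl.
    kubectl label --overwrite node ha-957600-m02 \
      minikube.k8s.io/name=ha-957600 minikube.k8s.io/primary=false
    kubectl taint node ha-957600-m02 node-role.kubernetes.io/control-plane:NoSchedule-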
	I0612 13:42:04.571472    7444 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:42:04.574475    7444 out.go:177] * Verifying Kubernetes components...
	I0612 13:42:04.572398    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:42:04.592293    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:42:04.941232    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:42:04.974500    7444 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:42:04.975377    7444 kapi.go:59] client config for ha-957600: &rest.Config{Host:"https://172.23.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0612 13:42:04.975513    7444 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.207.254:8443 with https://172.23.203.104:8443
	I0612 13:42:04.975826    7444 node_ready.go:35] waiting up to 6m0s for node "ha-957600-m02" to be "Ready" ...
	I0612 13:42:04.976433    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:04.976433    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:04.976433    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:04.976433    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:04.991070    7444 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0612 13:42:05.490850    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:05.491133    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:05.491245    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:05.491245    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:05.498996    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:42:05.981445    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:05.981525    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:05.981561    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:05.981561    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:05.987710    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:06.490383    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:06.490556    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:06.490556    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:06.490556    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:06.495874    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:06.977248    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:06.977248    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:06.977248    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:06.977248    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:06.982070    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:06.983399    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:07.483227    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:07.483288    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:07.483288    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:07.483288    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:07.487907    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:07.987262    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:07.987262    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:07.987414    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:07.987414    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:07.992696    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:08.478978    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:08.478978    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:08.478978    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:08.478978    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:08.484676    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:08.986243    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:08.986299    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:08.986299    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:08.986365    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:08.997023    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:42:08.998160    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:09.490893    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:09.491102    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:09.491102    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:09.491102    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:09.496789    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:09.980148    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:09.980148    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:09.980148    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:09.980148    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:09.984769    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:10.489192    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:10.489192    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:10.489192    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:10.489192    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:10.495410    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:10.981540    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:10.981704    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:10.981778    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:10.981778    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:10.988429    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:11.489595    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:11.489878    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:11.489878    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:11.489878    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:11.496263    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:11.497333    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:11.978857    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:11.978857    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:11.978857    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:11.978857    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:11.984810    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:12.480844    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:12.481058    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:12.481058    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:12.481058    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:12.486227    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:12.987027    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:12.987280    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:12.987280    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:12.987280    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:12.992999    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:13.491883    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:13.491948    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:13.491948    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:13.491948    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:13.497419    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:13.498102    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:13.978307    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:13.978383    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:13.978383    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:13.978383    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:13.983846    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:14.480233    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:14.480233    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:14.480313    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:14.480313    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:14.489558    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:14.984706    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:14.984835    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:14.984835    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:14.984835    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:14.991112    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:15.485486    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:15.485486    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:15.485580    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:15.485580    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:15.490533    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:15.986107    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:15.986198    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:15.986198    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:15.986198    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:15.991198    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:15.992562    7444 node_ready.go:53] node "ha-957600-m02" has status "Ready":"False"
	I0612 13:42:16.488385    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:16.488385    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:16.488385    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:16.488385    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:16.494100    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:16.990300    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:16.990493    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:16.990493    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:16.990493    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:16.998502    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:17.491946    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:17.491946    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:17.491946    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:17.491946    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:17.496589    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:17.979380    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:17.979380    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:17.979380    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:17.979380    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:17.985395    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:18.482659    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:18.482659    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.482659    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.482659    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.487250    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:18.488959    7444 node_ready.go:49] node "ha-957600-m02" has status "Ready":"True"
	I0612 13:42:18.488959    7444 node_ready.go:38] duration metric: took 13.512555s for node "ha-957600-m02" to be "Ready" ...
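The roughly 500ms GET loop above is minikube's hand-rolled readiness wait: poll /api/v1/nodes/ha-957600-m02 until the Ready condition flips to True, which here took 13.5s. The kubectl equivalent, as a sketch:

    # Sketch: the same readiness wait, expressed with kubectl.
    kubectl wait --for=condition=Ready node/ha-957600-m02 --timeout=6m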
	I0612 13:42:18.488959    7444 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 13:42:18.489124    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:18.489187    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.489187    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.489187    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.496466    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:42:18.507757    7444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.507757    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fvjdp
	I0612 13:42:18.507757    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.507757    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.507757    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.512135    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:18.513026    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.513026    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.513026    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.513026    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.516376    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:42:18.518151    7444 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.518246    7444 pod_ready.go:81] duration metric: took 10.4883ms for pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.518246    7444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.518395    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wv2wz
	I0612 13:42:18.518434    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.518434    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.518434    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.527176    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:18.528899    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.528899    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.528899    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.528899    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.533184    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:18.534087    7444 pod_ready.go:92] pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.534194    7444 pod_ready.go:81] duration metric: took 15.8416ms for pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.534194    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.534194    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600
	I0612 13:42:18.534194    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.534349    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.534349    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.537536    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:42:18.538806    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.538806    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.538806    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.538864    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.542190    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:42:18.543236    7444 pod_ready.go:92] pod "etcd-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.543236    7444 pod_ready.go:81] duration metric: took 9.0417ms for pod "etcd-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.543236    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.543437    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600-m02
	I0612 13:42:18.543513    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.543513    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.543513    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.548293    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:18.549284    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:18.549284    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.549349    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.549349    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.552637    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:42:18.553964    7444 pod_ready.go:92] pod "etcd-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.553964    7444 pod_ready.go:81] duration metric: took 10.7277ms for pod "etcd-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.553964    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.683596    7444 request.go:629] Waited for 129.2887ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600
	I0612 13:42:18.683837    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600
	I0612 13:42:18.683837    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.683837    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.683837    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.689706    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
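
The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default token-bucket rate limiter (QPS 5, burst 10), not from server-side API Priority and Fairness: each request over budget is held back before it is sent, which is why the back-to-back pod and node GETs in this log each pay a wait of roughly 130-200ms. A minimal sketch of raising those limits on a rest.Config, assuming a stock client-go setup rather than minikube's own wiring:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path is illustrative; load the kubeconfig however your tool does.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5, Burst=10; every request beyond the budget is
        // delayed, which is exactly what the "Waited for ..." lines report.
        cfg.QPS = 50
        cfg.Burst = 100
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }
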
	I0612 13:42:18.889218    7444 request.go:629] Waited for 198.4046ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.889619    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:18.889619    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:18.889690    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:18.889690    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:18.894806    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:18.896026    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:18.896026    7444 pod_ready.go:81] duration metric: took 342.0616ms for pod "kube-apiserver-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:18.896026    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.094154    7444 request.go:629] Waited for 197.5781ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m02
	I0612 13:42:19.094626    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m02
	I0612 13:42:19.094664    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.094664    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.094664    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.103248    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:19.284668    7444 request.go:629] Waited for 180.4139ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:19.284945    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:19.284945    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.284945    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.284945    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.293798    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:19.293798    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:19.293798    7444 pod_ready.go:81] duration metric: took 397.7707ms for pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.294683    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.490907    7444 request.go:629] Waited for 196.2234ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600
	I0612 13:42:19.490907    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600
	I0612 13:42:19.490907    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.490907    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.490907    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.499887    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:19.686951    7444 request.go:629] Waited for 185.8814ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:19.687196    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:19.687196    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.687196    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.687196    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.694066    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:19.694814    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:19.694814    7444 pod_ready.go:81] duration metric: took 400.1304ms for pod "kube-controller-manager-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.694814    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:19.892518    7444 request.go:629] Waited for 197.5558ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m02
	I0612 13:42:19.892732    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m02
	I0612 13:42:19.892732    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:19.892732    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:19.892732    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:19.898860    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:20.082771    7444 request.go:629] Waited for 182.5025ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:20.082875    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:20.082875    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.082875    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.082875    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.088067    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:20.089403    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:20.089488    7444 pod_ready.go:81] duration metric: took 394.6723ms for pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.089488    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j29r7" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.285117    7444 request.go:629] Waited for 195.3458ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j29r7
	I0612 13:42:20.285376    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j29r7
	I0612 13:42:20.285376    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.285376    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.285376    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.291100    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:20.487541    7444 request.go:629] Waited for 194.2676ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:20.487670    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:20.487670    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.487845    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.487845    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.492030    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:20.493938    7444 pod_ready.go:92] pod "kube-proxy-j29r7" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:20.494541    7444 pod_ready.go:81] duration metric: took 405.0009ms for pod "kube-proxy-j29r7" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.494541    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z94m6" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.690157    7444 request.go:629] Waited for 195.4742ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z94m6
	I0612 13:42:20.690414    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z94m6
	I0612 13:42:20.690414    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.690414    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.690414    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.696109    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:42:20.893067    7444 request.go:629] Waited for 195.3401ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:20.893402    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:20.893470    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:20.893470    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:20.893470    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:20.910684    7444 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0612 13:42:20.911655    7444 pod_ready.go:92] pod "kube-proxy-z94m6" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:20.911655    7444 pod_ready.go:81] duration metric: took 417.1123ms for pod "kube-proxy-z94m6" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:20.911655    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:21.093301    7444 request.go:629] Waited for 181.3269ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600
	I0612 13:42:21.093450    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600
	I0612 13:42:21.093450    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.093450    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.093450    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.104317    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:42:21.282964    7444 request.go:629] Waited for 177.529ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:21.282964    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:42:21.282964    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.282964    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.282964    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.289000    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:21.289900    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:21.289985    7444 pod_ready.go:81] duration metric: took 378.3287ms for pod "kube-scheduler-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:21.289985    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:21.484773    7444 request.go:629] Waited for 194.6909ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m02
	I0612 13:42:21.485299    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m02
	I0612 13:42:21.485299    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.485299    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.485299    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.492772    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:42:21.688684    7444 request.go:629] Waited for 194.5103ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:21.689010    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:42:21.689010    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.689092    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.689092    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.698560    7444 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 13:42:21.700002    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:42:21.700033    7444 pod_ready.go:81] duration metric: took 410.047ms for pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:42:21.700033    7444 pod_ready.go:38] duration metric: took 3.2109776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
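
Each pod_ready check above follows the same loop: GET the pod, look for its Ready condition, GET the node it is scheduled on, and repeat until Ready or the 6m0s timeout. A minimal sketch of the pod half of that loop with client-go (names and intervals are illustrative, not minikube's implementation; wait.PollUntilContextTimeout needs a reasonably recent k8s.io/apimachinery):

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitPodReady polls until the pod's Ready condition is True or the
    // timeout expires, mirroring the pod_ready.go loop in the log above.
    func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err // surface API errors immediately
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // condition not published yet; keep polling
            })
    }
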
	I0612 13:42:21.700115    7444 api_server.go:52] waiting for apiserver process to appear ...
	I0612 13:42:21.712208    7444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:42:21.743028    7444 api_server.go:72] duration metric: took 17.1715046s to wait for apiserver process to appear ...
	I0612 13:42:21.743192    7444 api_server.go:88] waiting for apiserver healthz status ...
	I0612 13:42:21.743192    7444 api_server.go:253] Checking apiserver healthz at https://172.23.203.104:8443/healthz ...
	I0612 13:42:21.752211    7444 api_server.go:279] https://172.23.203.104:8443/healthz returned 200:
	ok
	I0612 13:42:21.753277    7444 round_trippers.go:463] GET https://172.23.203.104:8443/version
	I0612 13:42:21.753277    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.753277    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.753277    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.755281    7444 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 13:42:21.755759    7444 api_server.go:141] control plane version: v1.30.1
	I0612 13:42:21.755859    7444 api_server.go:131] duration metric: took 12.6333ms to wait for apiserver health ...
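
The health probe above is two raw requests: GET /healthz, which must return the literal body "ok", followed by GET /version to read the control-plane version (v1.30.1 here). A small sketch of the /healthz half through client-go's discovery REST client (an illustrative helper, not the test's own code):

    package health

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // CheckHealthz hits /healthz on the apiserver and expects the literal
    // body "ok", matching the 200 response shown in the log above.
    func CheckHealthz(ctx context.Context, cs kubernetes.Interface) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("unexpected healthz response: %q", body)
        }
        return nil
    }
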
	I0612 13:42:21.755892    7444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 13:42:21.891895    7444 request.go:629] Waited for 135.9031ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:21.891895    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:21.891895    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:21.892245    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:21.892245    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:21.902090    7444 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 13:42:21.909995    7444 system_pods.go:59] 17 kube-system pods found
	I0612 13:42:21.910053    7444 system_pods.go:61] "coredns-7db6d8ff4d-fvjdp" [6cb59655-8c1c-493a-89ee-b4ae9ceacdbb] Running
	I0612 13:42:21.910053    7444 system_pods.go:61] "coredns-7db6d8ff4d-wv2wz" [2c2ce90f-b175-4ea7-a936-878c326f66af] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "etcd-ha-957600" [7cce4e7e-9ea8-48f3-b7f5-dc4c445cfe5d] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "etcd-ha-957600-m02" [fa3c8b8b-4744-4a4f-8025-44485b3a7a5f] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kindnet-54xjp" [cf89e4c7-5d54-48fb-9a94-76364e2f3d3c] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kindnet-gdk8g" [0eac7aaf-2341-4580-92d1-ea700cf2fa0f] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kube-apiserver-ha-957600" [14343c48-f30d-430c-81e0-24b68835b4fd] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kube-apiserver-ha-957600-m02" [3ba7d864-6b01-4152-8027-2fe8e0d5d6bb] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kube-controller-manager-ha-957600" [3cc0e64f-a1d7-4062-b78a-b9de960cf935] Running
	I0612 13:42:21.910131    7444 system_pods.go:61] "kube-controller-manager-ha-957600-m02" [fb9dba99-8e76-4c2f-b427-de3fee7d0300] Running
	I0612 13:42:21.910208    7444 system_pods.go:61] "kube-proxy-j29r7" [e87fe1ac-6577-44e3-af8f-c28e878fea08] Running
	I0612 13:42:21.910208    7444 system_pods.go:61] "kube-proxy-z94m6" [cdd33d94-1a1c-4038-aeda-0c6e1d68e559] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "kube-scheduler-ha-957600" [28ad5883-d593-42a7-952f-0038a7bb25d6] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "kube-scheduler-ha-957600-m02" [d3a27ea9-a208-4278-8a50-332971e8a78c] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "kube-vip-ha-957600" [2780187a-2cd6-43da-93bd-73c0dc959228] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "kube-vip-ha-957600-m02" [0908b051-1096-41ae-b457-36b2162ae907] Running
	I0612 13:42:21.910229    7444 system_pods.go:61] "storage-provisioner" [9a5d025e-c240-4084-a1bd-1db96161d3b3] Running
	I0612 13:42:21.910229    7444 system_pods.go:74] duration metric: took 154.3369ms to wait for pod list to return data ...
	I0612 13:42:21.910229    7444 default_sa.go:34] waiting for default service account to be created ...
	I0612 13:42:22.094744    7444 request.go:629] Waited for 184.0501ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/default/serviceaccounts
	I0612 13:42:22.094744    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/default/serviceaccounts
	I0612 13:42:22.094744    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:22.094744    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:22.094744    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:22.101352    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:42:22.102554    7444 default_sa.go:45] found service account: "default"
	I0612 13:42:22.102655    7444 default_sa.go:55] duration metric: took 192.3608ms for default service account to be created ...
	I0612 13:42:22.102655    7444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 13:42:22.296448    7444 request.go:629] Waited for 193.4167ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:22.296678    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:42:22.296863    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:22.296863    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:22.296955    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:22.305411    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:42:22.312789    7444 system_pods.go:86] 17 kube-system pods found
	I0612 13:42:22.312789    7444 system_pods.go:89] "coredns-7db6d8ff4d-fvjdp" [6cb59655-8c1c-493a-89ee-b4ae9ceacdbb] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "coredns-7db6d8ff4d-wv2wz" [2c2ce90f-b175-4ea7-a936-878c326f66af] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "etcd-ha-957600" [7cce4e7e-9ea8-48f3-b7f5-dc4c445cfe5d] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "etcd-ha-957600-m02" [fa3c8b8b-4744-4a4f-8025-44485b3a7a5f] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kindnet-54xjp" [cf89e4c7-5d54-48fb-9a94-76364e2f3d3c] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kindnet-gdk8g" [0eac7aaf-2341-4580-92d1-ea700cf2fa0f] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-apiserver-ha-957600" [14343c48-f30d-430c-81e0-24b68835b4fd] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-apiserver-ha-957600-m02" [3ba7d864-6b01-4152-8027-2fe8e0d5d6bb] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-controller-manager-ha-957600" [3cc0e64f-a1d7-4062-b78a-b9de960cf935] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-controller-manager-ha-957600-m02" [fb9dba99-8e76-4c2f-b427-de3fee7d0300] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-proxy-j29r7" [e87fe1ac-6577-44e3-af8f-c28e878fea08] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-proxy-z94m6" [cdd33d94-1a1c-4038-aeda-0c6e1d68e559] Running
	I0612 13:42:22.312789    7444 system_pods.go:89] "kube-scheduler-ha-957600" [28ad5883-d593-42a7-952f-0038a7bb25d6] Running
	I0612 13:42:22.313384    7444 system_pods.go:89] "kube-scheduler-ha-957600-m02" [d3a27ea9-a208-4278-8a50-332971e8a78c] Running
	I0612 13:42:22.313384    7444 system_pods.go:89] "kube-vip-ha-957600" [2780187a-2cd6-43da-93bd-73c0dc959228] Running
	I0612 13:42:22.313384    7444 system_pods.go:89] "kube-vip-ha-957600-m02" [0908b051-1096-41ae-b457-36b2162ae907] Running
	I0612 13:42:22.313384    7444 system_pods.go:89] "storage-provisioner" [9a5d025e-c240-4084-a1bd-1db96161d3b3] Running
	I0612 13:42:22.313465    7444 system_pods.go:126] duration metric: took 210.8099ms to wait for k8s-apps to be running ...
	I0612 13:42:22.313465    7444 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 13:42:22.324004    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:42:22.349235    7444 system_svc.go:56] duration metric: took 35.7694ms WaitForService to wait for kubelet
	I0612 13:42:22.349392    7444 kubeadm.go:576] duration metric: took 17.7778671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 13:42:22.349392    7444 node_conditions.go:102] verifying NodePressure condition ...
	I0612 13:42:22.483364    7444 request.go:629] Waited for 133.7586ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes
	I0612 13:42:22.483546    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes
	I0612 13:42:22.483546    7444 round_trippers.go:469] Request Headers:
	I0612 13:42:22.483546    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:42:22.483696    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:42:22.489070    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:42:22.489844    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:42:22.489844    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:42:22.489844    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:42:22.489844    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:42:22.489844    7444 node_conditions.go:105] duration metric: took 140.4516ms to run NodePressure ...
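
The NodePressure step reads resource figures straight off each node object; "node storage ephemeral capacity is 17734596Ki" and "node cpu capacity is 2" are node status fields, reported once per node in the two-node cluster at this point. A sketch of listing them (illustrative helper; depending on what a check cares about it may read Allocatable instead of Capacity, which differs only by system-reserved resources):

    package nodes

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // PrintNodeCapacity lists each node's CPU and ephemeral-storage figures,
    // the same quantities the node_conditions check logs above.
    func PrintNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }
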
	I0612 13:42:22.489844    7444 start.go:240] waiting for startup goroutines ...
	I0612 13:42:22.489844    7444 start.go:254] writing updated cluster config ...
	I0612 13:42:22.494035    7444 out.go:177] 
	I0612 13:42:22.507717    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:42:22.508367    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:42:22.516908    7444 out.go:177] * Starting "ha-957600-m03" control-plane node in "ha-957600" cluster
	I0612 13:42:22.521852    7444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 13:42:22.521852    7444 cache.go:56] Caching tarball of preloaded images
	I0612 13:42:22.522341    7444 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 13:42:22.522655    7444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 13:42:22.522886    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:42:22.524188    7444 start.go:360] acquireMachinesLock for ha-957600-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 13:42:22.525169    7444 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-957600-m03"
	I0612 13:42:22.525169    7444 start.go:93] Provisioning new machine with config: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:42:22.525169    7444 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0612 13:42:22.531990    7444 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 13:42:22.531990    7444 start.go:159] libmachine.API.Create for "ha-957600" (driver="hyperv")
	I0612 13:42:22.532753    7444 client.go:168] LocalClient.Create starting
	I0612 13:42:22.533006    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 13:42:22.533483    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:42:22.533544    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:42:22.533720    7444 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 13:42:22.533720    7444 main.go:141] libmachine: Decoding PEM data...
	I0612 13:42:22.533720    7444 main.go:141] libmachine: Parsing certificate...
	I0612 13:42:22.533720    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 13:42:24.505639    7444 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 13:42:24.505639    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:24.505639    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 13:42:26.265969    7444 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 13:42:26.266443    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:26.266443    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:42:27.781494    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:42:27.781494    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:27.782429    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:42:31.603434    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:42:31.603434    7444 main.go:141] libmachine: [stderr =====>] : 
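
Every "[executing ==>]" / "[stdout =====>]" pair above is the driver shelling out to powershell.exe and reading the ConvertTo-Json output back; the switch enumeration, for instance, returns a JSON array of {Id, Name, SwitchType} objects. A minimal sketch of that pattern in Go (the struct is inferred from the JSON shown above, not taken from minikube's sources):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch mirrors the fields selected by the Get-VMSwitch pipeline above.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
        out, err := cmd.Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("switch %q (type %d)\n", s.Name, s.SwitchType)
        }
    }
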
	I0612 13:42:31.605468    7444 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 13:42:32.076997    7444 main.go:141] libmachine: Creating SSH key...
	I0612 13:42:32.179558    7444 main.go:141] libmachine: Creating VM...
	I0612 13:42:32.179558    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 13:42:35.145942    7444 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 13:42:35.145942    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:35.145942    7444 main.go:141] libmachine: Using switch "Default Switch"
	I0612 13:42:35.147132    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 13:42:36.934906    7444 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 13:42:36.934906    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:36.935744    7444 main.go:141] libmachine: Creating VHD
	I0612 13:42:36.935744    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 13:42:40.771426    7444 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 52CA2E3A-85EC-4D22-8835-02E0CFA6A387
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 13:42:40.771491    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:40.771491    7444 main.go:141] libmachine: Writing magic tar header
	I0612 13:42:40.771491    7444 main.go:141] libmachine: Writing SSH key tar header
	I0612 13:42:40.780544    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 13:42:44.096351    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:44.096351    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:44.097204    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\disk.vhd' -SizeBytes 20000MB
	I0612 13:42:46.644613    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:46.644613    7444 main.go:141] libmachine: [stderr =====>] : 
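
The disk above is built in three steps: create a tiny 10MB fixed VHD, write the SSH key into it as raw tar data (the "magic tar header" lines), then convert the file to a dynamic VHD and resize it to 20000MB so the guest can grow into it on first boot. A sketch of driving the same three cmdlets from Go, with the tar-injection step omitted (paths and error handling are illustrative; the argument passing is naive and would need hardening for paths with spaces):

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    // buildVHD reproduces the New-VHD / Convert-VHD / Resize-VHD sequence
    // from the log: a small fixed disk first, then dynamic, then grown.
    func buildVHD(dir string) error {
        fixed := filepath.Join(dir, "fixed.vhd")
        disk := filepath.Join(dir, "disk.vhd")
        steps := [][]string{
            {`Hyper-V\New-VHD`, "-Path", fixed, "-SizeBytes", "10MB", "-Fixed"},
            {`Hyper-V\Convert-VHD`, "-Path", fixed, "-DestinationPath", disk, "-VHDType", "Dynamic", "-DeleteSource"},
            {`Hyper-V\Resize-VHD`, "-Path", disk, "-SizeBytes", "20000MB"},
        }
        for _, s := range steps {
            args := append([]string{"-NoProfile", "-NonInteractive"}, s...)
            if out, err := exec.Command("powershell.exe", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%s failed: %v: %s", s[0], err, out)
            }
        }
        return nil
    }
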
	I0612 13:42:46.644613    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-957600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0612 13:42:50.361360    7444 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-957600-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 13:42:50.361360    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:50.362337    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-957600-m03 -DynamicMemoryEnabled $false
	I0612 13:42:52.625998    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:52.625998    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:52.625998    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-957600-m03 -Count 2
	I0612 13:42:54.835871    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:54.836882    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:54.836996    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-957600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\boot2docker.iso'
	I0612 13:42:57.506138    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:42:57.506236    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:42:57.506322    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-957600-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\disk.vhd'
	I0612 13:43:00.270226    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:00.271052    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:00.271052    7444 main.go:141] libmachine: Starting VM...
	I0612 13:43:00.271052    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-957600-m03
	I0612 13:43:03.433514    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:03.433514    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:03.433514    7444 main.go:141] libmachine: Waiting for host to start...
	I0612 13:43:03.433514    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:05.782081    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:05.782081    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:05.782081    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:08.442413    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:08.442507    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:09.445627    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:11.759213    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:11.759213    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:11.759457    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:14.393086    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:14.393086    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:15.401413    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:17.670508    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:17.670508    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:17.670508    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:20.243279    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:20.243279    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:21.252210    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:23.501968    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:23.501968    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:23.502169    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:26.130542    7444 main.go:141] libmachine: [stdout =====>] : 
	I0612 13:43:26.130542    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:27.145341    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:29.415229    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:29.415457    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:29.415557    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:32.057468    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:32.057468    7444 main.go:141] libmachine: [stderr =====>] : 
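
"Waiting for host to start..." above is a plain retry loop: read the VM state, read the first IP of the first network adapter, and sleep whenever the address is still empty (DHCP on the Default Switch takes a while; roughly 30 seconds here). A generic sketch of that shape (interval and timeout are illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    // waitForIP polls fn until it returns a non-empty address or the
    // timeout expires, mirroring the Get-VM polling loop in the log above.
    func waitForIP(fn func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ip, err := fn()
            if err != nil {
                return "", err
            }
            if ip != "" {
                return ip, nil
            }
            time.Sleep(3 * time.Second) // no address yet; try again
        }
        return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
    }
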
	I0612 13:43:32.058327    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:34.261556    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:34.261556    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:34.262555    7444 machine.go:94] provisionDockerMachine start ...
	I0612 13:43:34.262619    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:36.456341    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:36.457304    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:36.457304    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:39.069760    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:39.069760    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:39.076699    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:43:39.076868    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:43:39.076868    7444 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 13:43:39.193656    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 13:43:39.193656    7444 buildroot.go:166] provisioning hostname "ha-957600-m03"
	I0612 13:43:39.193813    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:41.372809    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:41.372809    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:41.373348    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:43.987887    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:43.988118    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:43.993368    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:43:43.993759    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:43:43.993759    7444 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-957600-m03 && echo "ha-957600-m03" | sudo tee /etc/hostname
	I0612 13:43:44.141641    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-957600-m03
	
	I0612 13:43:44.141745    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:46.301068    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:46.301068    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:46.301306    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:48.885920    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:48.885920    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:48.895607    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:43:48.895607    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:43:48.895607    7444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-957600-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-957600-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-957600-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 13:43:49.033469    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
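
All of the provisioning snippets above (hostname, the /etc/hosts guard that just ran, and the commands that follow) execute over SSH as the docker user with the per-machine key. A minimal sketch of one such remote command with golang.org/x/crypto/ssh (the host, user, and key path are taken from this log but stand in for whatever a real caller would use):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "172.23.207.166:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }
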
	I0612 13:43:49.033469    7444 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 13:43:49.033469    7444 buildroot.go:174] setting up certificates
	I0612 13:43:49.033469    7444 provision.go:84] configureAuth start
	I0612 13:43:49.034029    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:51.225358    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:51.226309    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:51.226309    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:53.821502    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:53.821502    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:53.821596    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:43:56.006989    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:43:56.008019    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:56.008130    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:43:58.653093    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:43:58.653423    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:43:58.653423    7444 provision.go:143] copyHostCerts
	I0612 13:43:58.653588    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 13:43:58.654174    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 13:43:58.654308    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 13:43:58.654930    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 13:43:58.656568    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 13:43:58.656969    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 13:43:58.656969    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 13:43:58.657511    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 13:43:58.658868    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 13:43:58.659300    7444 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 13:43:58.659300    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 13:43:58.659795    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 13:43:58.660750    7444 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-957600-m03 san=[127.0.0.1 172.23.207.166 ha-957600-m03 localhost minikube]
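
The server certificate is issued against the local CA with subject alternative names covering loopback, the VM address, the machine name, localhost, and minikube, exactly the san=[...] list above. A condensed sketch of issuing such a cert with crypto/x509 (PEM parsing of ca.pem/ca-key.pem is elided; the 26280h lifetime echoes the CertExpiration value in the cluster config earlier in this log):

    package certs

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // NewServerCert issues a CA-signed server certificate whose SANs match
    // the san=[...] list in the log above. It returns DER bytes plus the key.
    func NewServerCert(caCert *x509.Certificate, caKey crypto.Signer) ([]byte, crypto.Signer, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-957600-m03"}},
            DNSNames:     []string{"ha-957600-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.23.207.166")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, key.Public(), caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }
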
	I0612 13:43:58.872014    7444 provision.go:177] copyRemoteCerts
	I0612 13:43:58.885906    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 13:43:58.886119    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:01.066180    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:01.066180    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:01.066336    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:03.658867    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:03.659817    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:03.659987    7444 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 13:44:03.767203    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8812834s)
	I0612 13:44:03.767203    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 13:44:03.768115    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 13:44:03.817387    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 13:44:03.817755    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0612 13:44:03.862826    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 13:44:03.863127    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 13:44:03.912402    7444 provision.go:87] duration metric: took 14.8788897s to configureAuth
	I0612 13:44:03.912402    7444 buildroot.go:189] setting minikube options for container-runtime
	I0612 13:44:03.913300    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:44:03.913497    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:06.070941    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:06.070998    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:06.070998    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:08.709965    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:08.709965    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:08.716687    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:08.717262    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:08.717262    7444 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 13:44:08.832615    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 13:44:08.832615    7444 buildroot.go:70] root file system type: tmpfs
	I0612 13:44:08.832615    7444 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 13:44:08.832615    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:11.021634    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:11.021634    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:11.022437    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:13.684358    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:13.684358    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:13.690418    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:13.691099    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:13.691099    7444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.203.104"
	Environment="NO_PROXY=172.23.203.104,172.23.201.185"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 13:44:13.845756    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.203.104
	Environment=NO_PROXY=172.23.203.104,172.23.201.185
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
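Editor's note on the "%!s(MISSING)" tokens in the command above: the unit file is installed with a shell command of the form printf %s "<unit>" | sudo tee ..., and that command string is then logged through Go's fmt package without the operand for the %s verb, so fmt substitutes %!s(MISSING) in its place. A minimal Go reproduction of the artifact:

    package main

    import "fmt"

    func main() {
        // A Printf-style format string with no matching operand for %s:
        // Go's fmt prints %!s(MISSING) where the argument should be,
        // which is exactly the token that appears in the log.
        fmt.Printf("printf %s | sudo tee /lib/systemd/system/docker.service.new\n")
    }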
	I0612 13:44:13.845756    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:16.086043    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:16.086536    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:16.086642    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:18.749436    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:18.749436    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:18.755392    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:18.755392    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:18.755392    7444 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 13:44:20.985330    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
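The one-liner above is an idempotent update: the new unit file is swapped in, and docker reloaded and restarted, only when 'diff -u' reports a difference; here diff fails because no unit existed yet, so the file is installed and the symlink created. A rough Go sketch of that compare-then-swap step (my own helper, not minikube's code; assumes root on a systemd host):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit installs desired at path and bounces the service only if
    // the on-disk content differs or the file does not exist yet.
    func updateUnit(path string, desired []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, desired) {
            return nil // unchanged: skip daemon-reload and restart
        }
        if err := os.WriteFile(path, desired, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
        if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }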
	I0612 13:44:20.985430    7444 machine.go:97] duration metric: took 46.7227401s to provisionDockerMachine
	I0612 13:44:20.985430    7444 client.go:171] duration metric: took 1m58.4523263s to LocalClient.Create
	I0612 13:44:20.985430    7444 start.go:167] duration metric: took 1m58.4530899s to libmachine.API.Create "ha-957600"
	I0612 13:44:20.985430    7444 start.go:293] postStartSetup for "ha-957600-m03" (driver="hyperv")
	I0612 13:44:20.985588    7444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 13:44:20.997134    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 13:44:20.997134    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:23.223240    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:23.224008    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:23.224008    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:25.860480    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:25.860480    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:25.860857    7444 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 13:44:25.979668    7444 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.982496s)
	I0612 13:44:25.993145    7444 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 13:44:26.001167    7444 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 13:44:26.001167    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 13:44:26.001360    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 13:44:26.002216    7444 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 13:44:26.002356    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 13:44:26.013568    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 13:44:26.032719    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 13:44:26.081340    7444 start.go:296] duration metric: took 5.0958945s for postStartSetup
	I0612 13:44:26.084511    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:28.342570    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:28.343609    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:28.343609    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:30.949745    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:30.950009    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:30.950082    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\config.json ...
	I0612 13:44:30.953837    7444 start.go:128] duration metric: took 2m8.4282883s to createHost
	I0612 13:44:30.953940    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:33.174218    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:33.174218    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:33.174218    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:35.809220    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:35.809220    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:35.815411    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:35.815939    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:35.816109    7444 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0612 13:44:35.940580    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718225075.920684081
	
	I0612 13:44:35.940580    7444 fix.go:216] guest clock: 1718225075.920684081
	I0612 13:44:35.940580    7444 fix.go:229] Guest: 2024-06-12 13:44:35.920684081 -0700 PDT Remote: 2024-06-12 13:44:30.9539401 -0700 PDT m=+574.495426101 (delta=4.966743981s)
	I0612 13:44:35.940580    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:38.145837    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:38.145837    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:38.145837    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:40.742046    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:40.743040    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:40.743696    7444 main.go:141] libmachine: Using SSH client type: native
	I0612 13:44:40.747905    7444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.207.166 22 <nil> <nil>}
	I0612 13:44:40.747940    7444 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718225075
	I0612 13:44:40.892181    7444 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 20:44:35 UTC 2024
	
	I0612 13:44:40.892181    7444 fix.go:236] clock set: Wed Jun 12 20:44:35 UTC 2024
	 (err=<nil>)
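The clock fix above reads the guest clock with date +%s.%N (the verbs are logged as %!s(MISSING).%!N(MISSING), per the fmt artifact noted earlier), compares it to the host clock, and, since the ~4.97s delta exceeds tolerance, resets the guest with sudo date -s @<epoch>. A sketch of the drift check under those assumptions (helper name and tolerance are mine):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // needsSync parses the guest's `date +%s.%N` output and reports whether
    // the guest clock drifts from hostNow by more than tolerance.
    func needsSync(guestOut string, hostNow time.Time, tolerance time.Duration) (bool, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return false, err
        }
        var nsec int64
        if len(parts) == 2 {
            // date +%N prints exactly nine digits, so this is already nanoseconds
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return false, err
            }
        }
        drift := hostNow.Sub(time.Unix(sec, nsec))
        if drift < 0 {
            drift = -drift
        }
        return drift > tolerance, nil
    }

    func main() {
        // Values from the log: guest 1718225075.920684081, host ~4.97s behind.
        drift, _ := needsSync("1718225075.920684081", time.Unix(1718225070, 953940100), 2*time.Second)
        fmt.Println("resync needed:", drift) // true
    }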
	I0612 13:44:40.892181    7444 start.go:83] releasing machines lock for "ha-957600-m03", held for 2m18.3666034s
	I0612 13:44:40.892181    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:43.089516    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:43.090341    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:43.090402    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:45.711791    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:45.712707    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:45.715499    7444 out.go:177] * Found network options:
	I0612 13:44:45.721808    7444 out.go:177]   - NO_PROXY=172.23.203.104,172.23.201.185
	W0612 13:44:45.724707    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:44:45.724707    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 13:44:45.729901    7444 out.go:177]   - NO_PROXY=172.23.203.104,172.23.201.185
	W0612 13:44:45.732339    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:44:45.732339    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:44:45.733351    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 13:44:45.733351    7444 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 13:44:45.736757    7444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 13:44:45.736927    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:45.752938    7444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 13:44:45.752938    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600-m03 ).state
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:48.002880    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 13:44:50.879026    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:50.879026    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:50.879718    7444 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 13:44:50.904562    7444 main.go:141] libmachine: [stdout =====>] : 172.23.207.166
	
	I0612 13:44:50.904562    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:50.905562    7444 sshutil.go:53] new ssh client: &{IP:172.23.207.166 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600-m03\id_rsa Username:docker}
	I0612 13:44:51.048771    7444 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3119985s)
	I0612 13:44:51.048771    7444 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2958182s)
	W0612 13:44:51.048886    7444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 13:44:51.063457    7444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 13:44:51.096537    7444 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 13:44:51.096621    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:44:51.096908    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:44:51.151011    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 13:44:51.186637    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 13:44:51.208422    7444 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 13:44:51.221177    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 13:44:51.255135    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:44:51.290804    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 13:44:51.328740    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 13:44:51.364911    7444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 13:44:51.396840    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 13:44:51.429986    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 13:44:51.463908    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 13:44:51.498030    7444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 13:44:51.533753    7444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 13:44:51.569682    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:44:51.798092    7444 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0612 13:44:51.832181    7444 start.go:494] detecting cgroup driver to use...
	I0612 13:44:51.846261    7444 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 13:44:51.887558    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:44:51.928349    7444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 13:44:51.972441    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 13:44:52.013595    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:44:52.051544    7444 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 13:44:52.114982    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 13:44:52.142046    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 13:44:52.189762    7444 ssh_runner.go:195] Run: which cri-dockerd
	I0612 13:44:52.208763    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 13:44:52.230310    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 13:44:52.279514    7444 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 13:44:52.480453    7444 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 13:44:52.662708    7444 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 13:44:52.663717    7444 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 13:44:52.704709    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:44:52.919478    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 13:44:55.453209    7444 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5336437s)
	I0612 13:44:55.463979    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 13:44:55.497551    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:44:55.532784    7444 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 13:44:55.744138    7444 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 13:44:55.947812    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:44:56.148634    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 13:44:56.190414    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 13:44:56.226411    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:44:56.429336    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 13:44:56.534902    7444 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 13:44:56.545119    7444 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 13:44:56.554119    7444 start.go:562] Will wait 60s for crictl version
	I0612 13:44:56.565760    7444 ssh_runner.go:195] Run: which crictl
	I0612 13:44:56.583915    7444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 13:44:56.638185    7444 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 13:44:56.647338    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:44:56.697196    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 13:44:56.734658    7444 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 13:44:56.738131    7444 out.go:177]   - env NO_PROXY=172.23.203.104
	I0612 13:44:56.741132    7444 out.go:177]   - env NO_PROXY=172.23.203.104,172.23.201.185
	I0612 13:44:56.745128    7444 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 13:44:56.750131    7444 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 13:44:56.750131    7444 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 13:44:56.750131    7444 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 13:44:56.750131    7444 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 13:44:56.753128    7444 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 13:44:56.753564    7444 ip.go:210] interface addr: 172.23.192.1/20
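The getIPForInterface search above enumerates the host's network interfaces, skips names that do not start with the configured prefix ("Ethernet 2" and the loopback are rejected), and takes the first IPv4 address of the match. A minimal standard-library sketch of that lookup (the helper name is mine; the prefix is taken from the log):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterfacePrefix returns the first IPv4 address of the first
    // interface whose name starts with prefix.
    func ipForInterfacePrefix(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue // e.g. "Ethernet 2" does not match the prefix
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, addr := range addrs {
                // Skip the fe80:: link-local entry; keep the IPv4 address.
                if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil // e.g. 172.23.192.1 on the Default Switch
                }
            }
        }
        return nil, fmt.Errorf("no interface matches prefix %q", prefix)
    }

    func main() {
        ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }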
	I0612 13:44:56.764980    7444 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 13:44:56.771569    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 13:44:56.792581    7444 mustload.go:65] Loading cluster: ha-957600
	I0612 13:44:56.793614    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:44:56.793614    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:44:58.932880    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:44:58.933852    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:44:58.933905    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:44:58.934659    7444 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600 for IP: 172.23.207.166
	I0612 13:44:58.934659    7444 certs.go:194] generating shared ca certs ...
	I0612 13:44:58.934659    7444 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:44:58.935391    7444 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 13:44:58.935758    7444 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 13:44:58.936018    7444 certs.go:256] generating profile certs ...
	I0612 13:44:58.936801    7444 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\client.key
	I0612 13:44:58.936978    7444 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.d3d55635
	I0612 13:44:58.937059    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.d3d55635 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.203.104 172.23.201.185 172.23.207.166 172.23.207.254]
	I0612 13:44:59.233230    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.d3d55635 ...
	I0612 13:44:59.233230    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.d3d55635: {Name:mkf1927f6658f26a3c5c8cdc9941635a8db96e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:44:59.235333    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.d3d55635 ...
	I0612 13:44:59.235333    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.d3d55635: {Name:mk6d15b5e0913ab7adc90bd98bcfcea07d9da2f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 13:44:59.235806    7444 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt.d3d55635 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt
	I0612 13:44:59.247779    7444 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key.d3d55635 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key
	I0612 13:44:59.248774    7444 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key
	I0612 13:44:59.248774    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 13:44:59.248774    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 13:44:59.249788    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 13:44:59.250780    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 13:44:59.250780    7444 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 13:44:59.250780    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 13:44:59.251779    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 13:44:59.251779    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 13:44:59.251779    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 13:44:59.251779    7444 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 13:44:59.252783    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:44:59.252783    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 13:44:59.252783    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 13:44:59.252783    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:45:01.413194    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:45:01.413698    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:45:01.413763    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:45:04.024837    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:45:04.024944    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:45:04.025121    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:45:04.124504    7444 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0612 13:45:04.133038    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0612 13:45:04.166281    7444 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0612 13:45:04.174802    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0612 13:45:04.209286    7444 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0612 13:45:04.217132    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0612 13:45:04.251940    7444 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0612 13:45:04.258557    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0612 13:45:04.291966    7444 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0612 13:45:04.300233    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0612 13:45:04.334996    7444 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0612 13:45:04.345497    7444 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0612 13:45:04.365672    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 13:45:04.415413    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 13:45:04.475608    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 13:45:04.526997    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 13:45:04.574840    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0612 13:45:04.621602    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 13:45:04.668465    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 13:45:04.716131    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-957600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 13:45:04.764518    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 13:45:04.814660    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 13:45:04.867959    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 13:45:04.917870    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0612 13:45:04.953989    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0612 13:45:04.985129    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0612 13:45:05.018721    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0612 13:45:05.056744    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0612 13:45:05.089655    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0612 13:45:05.122286    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0612 13:45:05.169175    7444 ssh_runner.go:195] Run: openssl version
	I0612 13:45:05.189686    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 13:45:05.222188    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 13:45:05.229721    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 13:45:05.241233    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 13:45:05.263860    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 13:45:05.300596    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 13:45:05.339228    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:45:05.348291    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:45:05.360064    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 13:45:05.380455    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 13:45:05.416609    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 13:45:05.447603    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 13:45:05.455115    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 13:45:05.466956    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 13:45:05.487535    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
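The openssl x509 -hash -noout runs above compute each certificate's subject hash; OpenSSL's trust-store lookup expects a symlink named <hash>.0 in /etc/ssl/certs pointing at the PEM file, which is what the ln -fs commands create (e.g. b5213941.0 for minikubeCA.pem). A small Go sketch of the same step, shelling out to openssl (assumes the openssl binary and write access to the trust directory):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCert links certPath into dir under OpenSSL's <subject-hash>.0
    // naming scheme so verification can find it by hash lookup.
    func installCert(certPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // mirror `ln -fs`: replace any stale link, ignore absence
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }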
	I0612 13:45:05.521678    7444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 13:45:05.530161    7444 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 13:45:05.530559    7444 kubeadm.go:928] updating node {m03 172.23.207.166 8443 v1.30.1 docker true true} ...
	I0612 13:45:05.530779    7444 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-957600-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.207.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 13:45:05.530852    7444 kube-vip.go:115] generating kube-vip config ...
	I0612 13:45:05.542993    7444 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0612 13:45:05.568545    7444 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0612 13:45:05.568636    7444 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.23.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0612 13:45:05.580451    7444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 13:45:05.600316    7444 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 13:45:05.611447    7444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 13:45:05.631049    7444 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0612 13:45:05.631049    7444 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0612 13:45:05.631049    7444 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0612 13:45:05.631686    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 13:45:05.631900    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 13:45:05.649865    7444 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 13:45:05.649865    7444 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 13:45:05.650879    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:45:05.656675    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 13:45:05.656938    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 13:45:05.657024    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 13:45:05.657024    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 13:45:05.710249    7444 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 13:45:05.726439    7444 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 13:45:05.812861    7444 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 13:45:05.812861    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0612 13:45:06.986918    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0612 13:45:07.006251    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0612 13:45:07.038927    7444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 13:45:07.071682    7444 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0612 13:45:07.120495    7444 ssh_runner.go:195] Run: grep 172.23.207.254	control-plane.minikube.internal$ /etc/hosts
	I0612 13:45:07.128339    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 13:45:07.162394    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:45:07.374619    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:45:07.403267    7444 host.go:66] Checking if "ha-957600" exists ...
	I0612 13:45:07.404276    7444 start.go:316] joinCluster: &{Name:ha-957600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-957600 Namespace:default APIServerHAVIP:172.23.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.203.104 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.201.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.23.207.166 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 13:45:07.404276    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 13:45:07.404276    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-957600 ).state
	I0612 13:45:09.602485    7444 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 13:45:09.602485    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:45:09.603060    7444 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-957600 ).networkadapters[0]).ipaddresses[0]
	I0612 13:45:12.254466    7444 main.go:141] libmachine: [stdout =====>] : 172.23.203.104
	
	I0612 13:45:12.254466    7444 main.go:141] libmachine: [stderr =====>] : 
	I0612 13:45:12.254701    7444 sshutil.go:53] new ssh client: &{IP:172.23.203.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-957600\id_rsa Username:docker}
	I0612 13:45:12.498007    7444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0937159s)
	I0612 13:45:12.498179    7444 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.23.207.166 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:45:12.498275    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cl2ns1.38fwkh9or36p9019 --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-957600-m03 --control-plane --apiserver-advertise-address=172.23.207.166 --apiserver-bind-port=8443"
	I0612 13:45:57.585999    7444 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cl2ns1.38fwkh9or36p9019 --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-957600-m03 --control-plane --apiserver-advertise-address=172.23.207.166 --apiserver-bind-port=8443": (45.0875391s)
	I0612 13:45:57.586543    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 13:45:58.319522    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-957600-m03 minikube.k8s.io/updated_at=2024_06_12T13_45_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=ha-957600 minikube.k8s.io/primary=false
	I0612 13:45:58.513740    7444 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-957600-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0612 13:45:58.690827    7444 start.go:318] duration metric: took 51.2863972s to joinCluster
	I0612 13:45:58.690827    7444 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.23.207.166 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 13:45:58.694138    7444 out.go:177] * Verifying Kubernetes components...
	I0612 13:45:58.691924    7444 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:45:58.712882    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 13:45:59.128830    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 13:45:59.165218    7444 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:45:59.166327    7444 kapi.go:59] client config for ha-957600: &rest.Config{Host:"https://172.23.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-957600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0612 13:45:59.166524    7444 kubeadm.go:477] Overriding stale ClientConfig host https://172.23.207.254:8443 with https://172.23.203.104:8443
	I0612 13:45:59.167439    7444 node_ready.go:35] waiting up to 6m0s for node "ha-957600-m03" to be "Ready" ...
	I0612 13:45:59.167559    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:45:59.167633    7444 round_trippers.go:469] Request Headers:
	I0612 13:45:59.167633    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:45:59.167706    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:45:59.184001    7444 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0612 13:45:59.678184    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:45:59.678184    7444 round_trippers.go:469] Request Headers:
	I0612 13:45:59.678184    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:45:59.678184    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:45:59.682764    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:00.183622    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:00.183622    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:00.183622    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:00.183622    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:00.189219    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:00.675130    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:00.675208    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:00.675208    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:00.675208    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:00.679692    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:01.170938    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:01.170938    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:01.170938    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:01.170938    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:01.178425    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:46:01.179603    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:01.677952    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:01.677952    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:01.677952    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:01.677952    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:01.683119    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:02.169729    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:02.169859    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:02.169859    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:02.169859    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:02.177857    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:46:02.678909    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:02.678909    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:02.678909    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:02.678909    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:02.683908    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:03.171709    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:03.171776    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:03.171776    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:03.171776    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:03.178388    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:03.675746    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:03.675815    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:03.675815    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:03.675815    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:03.681355    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:03.682523    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:04.168582    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:04.168648    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:04.168746    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:04.168746    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:04.173601    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:04.672923    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:04.673021    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:04.673021    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:04.673021    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:04.682829    7444 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 13:46:05.180712    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:05.180825    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:05.180825    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:05.180825    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:05.266098    7444 round_trippers.go:574] Response Status: 200 OK in 85 milliseconds
	I0612 13:46:05.668954    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:05.669077    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:05.669077    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:05.669077    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:05.673361    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:06.169738    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:06.169738    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:06.169738    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:06.169858    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:06.175965    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:06.177093    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:06.673056    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:06.673121    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:06.673201    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:06.673201    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:06.678472    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:07.177358    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:07.177436    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:07.177436    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:07.177436    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:07.181951    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:07.679683    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:07.679746    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:07.679746    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:07.679746    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:07.686804    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:46:08.180984    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:08.181047    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:08.181047    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:08.181047    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:08.188042    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:08.188816    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:08.681926    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:08.681926    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:08.681926    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:08.681926    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:08.686470    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:09.170979    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:09.170979    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:09.170979    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:09.171148    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:09.176416    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:09.668864    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:09.668864    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:09.668864    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:09.668864    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:09.673570    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:10.170909    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:10.171011    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:10.171011    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:10.171011    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:10.177551    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:10.671392    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:10.671503    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:10.671503    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:10.671503    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:10.676811    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:10.678386    7444 node_ready.go:53] node "ha-957600-m03" has status "Ready":"False"
	I0612 13:46:11.174535    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:11.174535    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.174535    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.174535    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.181183    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:11.678579    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:11.678847    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.678847    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.678847    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.683357    7444 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 13:46:11.683995    7444 node_ready.go:49] node "ha-957600-m03" has status "Ready":"True"
	I0612 13:46:11.683995    7444 node_ready.go:38] duration metric: took 12.5165181s for node "ha-957600-m03" to be "Ready" ...
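	The node_ready wait that just completed is a plain poll: GET the Node object roughly every 500ms (matching the timestamps above) until its NodeReady condition reports True. A minimal client-go sketch of the same check, assuming a pre-built clientset and leaving kubeconfig loading aside:

	// Sketch: poll a node until its Ready condition is True, mirroring the
	// node_ready.go loop above. Assumes cs was built from the kubeconfig.
	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient errors and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}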
	I0612 13:46:11.684069    7444 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 13:46:11.684173    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:11.684173    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.684173    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.684173    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.699875    7444 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0612 13:46:11.711431    7444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.711431    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fvjdp
	I0612 13:46:11.711431    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.711431    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.711431    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.717200    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:11.717899    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:11.717899    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.717899    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.717899    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.722191    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:11.723704    7444 pod_ready.go:92] pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:11.723704    7444 pod_ready.go:81] duration metric: took 12.273ms for pod "coredns-7db6d8ff4d-fvjdp" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.723791    7444 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.723860    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wv2wz
	I0612 13:46:11.723860    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.723968    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.723968    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.728523    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:11.729467    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:11.729467    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.729467    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.729467    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.740076    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:46:11.740765    7444 pod_ready.go:92] pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:11.740872    7444 pod_ready.go:81] duration metric: took 17.0806ms for pod "coredns-7db6d8ff4d-wv2wz" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.740872    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.740872    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600
	I0612 13:46:11.740872    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.740872    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.740872    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.749119    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:46:11.750029    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:11.750029    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.750029    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.750029    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.770947    7444 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0612 13:46:11.771749    7444 pod_ready.go:92] pod "etcd-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:11.771749    7444 pod_ready.go:81] duration metric: took 30.8769ms for pod "etcd-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.771749    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.771749    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600-m02
	I0612 13:46:11.771749    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.771749    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.771749    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.776158    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:11.778089    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:11.778210    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.778210    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.778241    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.782259    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:11.782306    7444 pod_ready.go:92] pod "etcd-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:11.782843    7444 pod_ready.go:81] duration metric: took 11.0937ms for pod "etcd-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.782913    7444 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:11.880796    7444 request.go:629] Waited for 97.6496ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600-m03
	I0612 13:46:11.880796    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-957600-m03
	I0612 13:46:11.881016    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:11.881016    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:11.881016    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:11.887211    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:12.085984    7444 request.go:629] Waited for 197.7915ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:12.086191    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:12.086191    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.086191    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.086191    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.094251    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:46:12.094963    7444 pod_ready.go:92] pod "etcd-ha-957600-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:12.094963    7444 pod_ready.go:81] duration metric: took 312.0488ms for pod "etcd-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
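	The "Waited ... due to client-side throttling" lines above come from client-go's own rate limiter, not from server-side priority and fairness: a rest.Config whose QPS and Burst are left at zero gets defaults of QPS=5 and Burst=10, and 5 requests/second is exactly the ~200ms spacing visible in the waits. A sketch of lifting those defaults, assuming cfg was loaded from a kubeconfig elsewhere:

	// Sketch: raising client-go's default rate limits (QPS=5, Burst=10 when
	// the fields are zero) to remove the ~200ms client-side throttling gaps.
	package throttle

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func newFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
		cfg.QPS = 50    // default is 5 requests per second
		cfg.Burst = 100 // default is 10
		return kubernetes.NewForConfig(cfg)
	}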
	I0612 13:46:12.094963    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:12.290270    7444 request.go:629] Waited for 195.0296ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600
	I0612 13:46:12.290459    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600
	I0612 13:46:12.290502    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.290502    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.290502    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.295340    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:12.478960    7444 request.go:629] Waited for 182.0165ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:12.479256    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:12.479256    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.479256    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.479256    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.484160    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:12.484160    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:12.485607    7444 pod_ready.go:81] duration metric: took 390.6428ms for pod "kube-apiserver-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:12.485607    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:12.680806    7444 request.go:629] Waited for 194.7748ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m02
	I0612 13:46:12.681124    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m02
	I0612 13:46:12.681216    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.681216    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.681216    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.686906    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:12.885832    7444 request.go:629] Waited for 197.7334ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:12.886319    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:12.886417    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:12.886417    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:12.886417    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:12.895316    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:46:12.896682    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:12.896682    7444 pod_ready.go:81] duration metric: took 411.0737ms for pod "kube-apiserver-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:12.896758    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.089726    7444 request.go:629] Waited for 192.9056ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m03
	I0612 13:46:13.090163    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-957600-m03
	I0612 13:46:13.090282    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.090282    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.090282    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.095727    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:13.278834    7444 request.go:629] Waited for 182.21ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:13.279172    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:13.279273    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.279273    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.279318    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.284930    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:13.285979    7444 pod_ready.go:92] pod "kube-apiserver-ha-957600-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:13.286178    7444 pod_ready.go:81] duration metric: took 389.4194ms for pod "kube-apiserver-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.286350    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.481234    7444 request.go:629] Waited for 194.6781ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600
	I0612 13:46:13.481435    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600
	I0612 13:46:13.481435    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.481435    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.481504    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.487519    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:13.682618    7444 request.go:629] Waited for 193.1799ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:13.682618    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:13.682618    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.682618    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.682618    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.688862    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:13.689922    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:13.690038    7444 pod_ready.go:81] duration metric: took 403.6866ms for pod "kube-controller-manager-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.690038    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:13.886594    7444 request.go:629] Waited for 196.3295ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m02
	I0612 13:46:13.886824    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m02
	I0612 13:46:13.886824    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:13.886824    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:13.886824    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:13.893769    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:14.090060    7444 request.go:629] Waited for 194.1022ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:14.090188    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:14.090188    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.090188    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.090188    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.096660    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:14.097391    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:14.097466    7444 pod_ready.go:81] duration metric: took 407.4266ms for pod "kube-controller-manager-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.097466    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.292644    7444 request.go:629] Waited for 194.9513ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m03
	I0612 13:46:14.292780    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-957600-m03
	I0612 13:46:14.292780    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.292780    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.292863    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.297509    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:14.479224    7444 request.go:629] Waited for 180.4095ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:14.479448    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:14.479448    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.479511    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.479511    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.486222    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:14.486985    7444 pod_ready.go:92] pod "kube-controller-manager-ha-957600-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:14.486985    7444 pod_ready.go:81] duration metric: took 389.5183ms for pod "kube-controller-manager-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.486985    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9qwpr" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.683188    7444 request.go:629] Waited for 195.9858ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9qwpr
	I0612 13:46:14.683332    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9qwpr
	I0612 13:46:14.683383    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.683383    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.683383    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.690996    7444 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 13:46:14.886991    7444 request.go:629] Waited for 194.7114ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:14.887072    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:14.887072    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:14.887072    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:14.887072    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:14.893767    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:14.894909    7444 pod_ready.go:92] pod "kube-proxy-9qwpr" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:14.894909    7444 pod_ready.go:81] duration metric: took 407.9231ms for pod "kube-proxy-9qwpr" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:14.894909    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j29r7" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.089446    7444 request.go:629] Waited for 194.0928ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j29r7
	I0612 13:46:15.089748    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j29r7
	I0612 13:46:15.089748    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.089748    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.089748    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.094357    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:15.293855    7444 request.go:629] Waited for 198.6093ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:15.293855    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:15.294002    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.294002    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.294002    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.300054    7444 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 13:46:15.301280    7444 pod_ready.go:92] pod "kube-proxy-j29r7" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:15.301280    7444 pod_ready.go:81] duration metric: took 406.2496ms for pod "kube-proxy-j29r7" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.301280    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z94m6" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.481124    7444 request.go:629] Waited for 179.5004ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z94m6
	I0612 13:46:15.481258    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z94m6
	I0612 13:46:15.481258    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.481258    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.481393    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.486724    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:15.684545    7444 request.go:629] Waited for 196.5477ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:15.684545    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:15.684545    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.684545    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.684545    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.694885    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:46:15.695702    7444 pod_ready.go:92] pod "kube-proxy-z94m6" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:15.695803    7444 pod_ready.go:81] duration metric: took 394.5213ms for pod "kube-proxy-z94m6" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.695851    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:15.886688    7444 request.go:629] Waited for 190.4819ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600
	I0612 13:46:15.886943    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600
	I0612 13:46:15.886943    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:15.886943    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:15.886943    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:15.895660    7444 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 13:46:16.088389    7444 request.go:629] Waited for 191.6675ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:16.088480    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600
	I0612 13:46:16.088480    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.088578    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.088578    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.093851    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:16.094483    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:16.094483    7444 pod_ready.go:81] duration metric: took 398.6304ms for pod "kube-scheduler-ha-957600" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.094483    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.291115    7444 request.go:629] Waited for 196.4571ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m02
	I0612 13:46:16.291242    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m02
	I0612 13:46:16.291379    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.291450    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.291450    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.296721    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:16.478901    7444 request.go:629] Waited for 180.4988ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:16.479375    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m02
	I0612 13:46:16.479375    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.479436    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.479465    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.484429    7444 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 13:46:16.486046    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:16.486117    7444 pod_ready.go:81] duration metric: took 391.6327ms for pod "kube-scheduler-ha-957600-m02" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.486117    7444 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.684926    7444 request.go:629] Waited for 198.4825ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m03
	I0612 13:46:16.685040    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-957600-m03
	I0612 13:46:16.685040    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.685040    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.685040    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.690478    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:16.888825    7444 request.go:629] Waited for 196.8388ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:16.888913    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes/ha-957600-m03
	I0612 13:46:16.888913    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.888913    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.888913    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.898975    7444 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 13:46:16.899763    7444 pod_ready.go:92] pod "kube-scheduler-ha-957600-m03" in "kube-system" namespace has status "Ready":"True"
	I0612 13:46:16.899763    7444 pod_ready.go:81] duration metric: took 413.5668ms for pod "kube-scheduler-ha-957600-m03" in "kube-system" namespace to be "Ready" ...
	I0612 13:46:16.899852    7444 pod_ready.go:38] duration metric: took 5.2156785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
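	Each pod_ready check above pairs a GET on the pod with a GET on its node; the pod passes once its PodReady condition is True. A minimal sketch of that condition test, assuming the Pod object has already been fetched as in the requests above:

	// Sketch: the readiness test applied to each system pod above. A running
	// pod counts as ready once its PodReady condition reports True.
	package podready

	import corev1 "k8s.io/api/core/v1"

	func isPodReady(pod *corev1.Pod) bool {
		if pod.Status.Phase != corev1.PodRunning {
			return false
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}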
	I0612 13:46:16.899852    7444 api_server.go:52] waiting for apiserver process to appear ...
	I0612 13:46:16.913160    7444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 13:46:16.943463    7444 api_server.go:72] duration metric: took 18.2525813s to wait for apiserver process to appear ...
	I0612 13:46:16.943548    7444 api_server.go:88] waiting for apiserver healthz status ...
	I0612 13:46:16.943548    7444 api_server.go:253] Checking apiserver healthz at https://172.23.203.104:8443/healthz ...
	I0612 13:46:16.949872    7444 api_server.go:279] https://172.23.203.104:8443/healthz returned 200:
	ok
	I0612 13:46:16.950247    7444 round_trippers.go:463] GET https://172.23.203.104:8443/version
	I0612 13:46:16.950247    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:16.950247    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:16.950247    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:16.951964    7444 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 13:46:16.952386    7444 api_server.go:141] control plane version: v1.30.1
	I0612 13:46:16.952386    7444 api_server.go:131] duration metric: took 8.8384ms to wait for apiserver health ...
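	The healthz probe above is an HTTPS GET against /healthz on the apiserver; a 200 response with body "ok" counts as healthy. A sketch of the same check; TLS verification is skipped here purely to keep the example short, whereas the real client trusts the cluster CA from ca.crt:

	// Sketch: the apiserver healthz check logged above. TLS verification is
	// disabled only for brevity; the real check uses the cluster CA instead.
	package healthz

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func checkHealthz(endpoint string) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("apiserver unhealthy: %d %q", resp.StatusCode, body)
		}
		return nil
	}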
	I0612 13:46:16.952386    7444 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 13:46:17.089393    7444 request.go:629] Waited for 136.7393ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:17.089393    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:17.089393    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:17.089393    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:17.089393    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:17.101970    7444 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0612 13:46:17.113512    7444 system_pods.go:59] 24 kube-system pods found
	I0612 13:46:17.113512    7444 system_pods.go:61] "coredns-7db6d8ff4d-fvjdp" [6cb59655-8c1c-493a-89ee-b4ae9ceacdbb] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "coredns-7db6d8ff4d-wv2wz" [2c2ce90f-b175-4ea7-a936-878c326f66af] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "etcd-ha-957600" [7cce4e7e-9ea8-48f3-b7f5-dc4c445cfe5d] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "etcd-ha-957600-m02" [fa3c8b8b-4744-4a4f-8025-44485b3a7a5f] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "etcd-ha-957600-m03" [e9fc9fc8-f655-49e6-98aa-5a772b66992d] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kindnet-54xjp" [cf89e4c7-5d54-48fb-9a94-76364e2f3d3c] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kindnet-gdk8g" [0eac7aaf-2341-4580-92d1-ea700cf2fa0f] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kindnet-mwpsf" [c191d87f-04fd-4e6c-b2fe-97e4c4e9db23] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-apiserver-ha-957600" [14343c48-f30d-430c-81e0-24b68835b4fd] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-apiserver-ha-957600-m02" [3ba7d864-6b01-4152-8027-2fe8e0d5d6bb] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-apiserver-ha-957600-m03" [4ad9ac9f-d682-431a-8a91-42e27c853f2b] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-controller-manager-ha-957600" [3cc0e64f-a1d7-4062-b78a-b9de960cf935] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-controller-manager-ha-957600-m02" [fb9dba99-8e76-4c2f-b427-de3fee7d0300] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-controller-manager-ha-957600-m03" [54e543ef-8ef5-43e4-b669-71eba6c9b629] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-proxy-9qwpr" [424d5d60-76b3-47ce-bc8f-75f61fccdd9a] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-proxy-j29r7" [e87fe1ac-6577-44e3-af8f-c28e878fea08] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-proxy-z94m6" [cdd33d94-1a1c-4038-aeda-0c6e1d68e559] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-scheduler-ha-957600" [28ad5883-d593-42a7-952f-0038a7bb25d6] Running
	I0612 13:46:17.113512    7444 system_pods.go:61] "kube-scheduler-ha-957600-m02" [d3a27ea9-a208-4278-8a50-332971e8a78c] Running
	I0612 13:46:17.113837    7444 system_pods.go:61] "kube-scheduler-ha-957600-m03" [3288ad97-c220-44a6-bde1-a329e7dab060] Running
	I0612 13:46:17.113941    7444 system_pods.go:61] "kube-vip-ha-957600" [2780187a-2cd6-43da-93bd-73c0dc959228] Running
	I0612 13:46:17.113941    7444 system_pods.go:61] "kube-vip-ha-957600-m02" [0908b051-1096-41ae-b457-36b2162ae907] Running
	I0612 13:46:17.113941    7444 system_pods.go:61] "kube-vip-ha-957600-m03" [30ec686b-5763-4ed8-b4d7-a7eab172d0d8] Running
	I0612 13:46:17.113941    7444 system_pods.go:61] "storage-provisioner" [9a5d025e-c240-4084-a1bd-1db96161d3b3] Running
	I0612 13:46:17.113941    7444 system_pods.go:74] duration metric: took 161.5545ms to wait for pod list to return data ...
	I0612 13:46:17.113941    7444 default_sa.go:34] waiting for default service account to be created ...
	I0612 13:46:17.283083    7444 request.go:629] Waited for 169.1418ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/default/serviceaccounts
	I0612 13:46:17.283083    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/default/serviceaccounts
	I0612 13:46:17.283083    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:17.283083    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:17.283083    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:17.289074    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:17.289431    7444 default_sa.go:45] found service account: "default"
	I0612 13:46:17.289431    7444 default_sa.go:55] duration metric: took 175.4889ms for default service account to be created ...
	I0612 13:46:17.289431    7444 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 13:46:17.487128    7444 request.go:629] Waited for 197.5015ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:17.487267    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/namespaces/kube-system/pods
	I0612 13:46:17.487267    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:17.487267    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:17.487462    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:17.500302    7444 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0612 13:46:17.511078    7444 system_pods.go:86] 24 kube-system pods found
	I0612 13:46:17.511078    7444 system_pods.go:89] "coredns-7db6d8ff4d-fvjdp" [6cb59655-8c1c-493a-89ee-b4ae9ceacdbb] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "coredns-7db6d8ff4d-wv2wz" [2c2ce90f-b175-4ea7-a936-878c326f66af] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "etcd-ha-957600" [7cce4e7e-9ea8-48f3-b7f5-dc4c445cfe5d] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "etcd-ha-957600-m02" [fa3c8b8b-4744-4a4f-8025-44485b3a7a5f] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "etcd-ha-957600-m03" [e9fc9fc8-f655-49e6-98aa-5a772b66992d] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kindnet-54xjp" [cf89e4c7-5d54-48fb-9a94-76364e2f3d3c] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kindnet-gdk8g" [0eac7aaf-2341-4580-92d1-ea700cf2fa0f] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kindnet-mwpsf" [c191d87f-04fd-4e6c-b2fe-97e4c4e9db23] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-apiserver-ha-957600" [14343c48-f30d-430c-81e0-24b68835b4fd] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-apiserver-ha-957600-m02" [3ba7d864-6b01-4152-8027-2fe8e0d5d6bb] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-apiserver-ha-957600-m03" [4ad9ac9f-d682-431a-8a91-42e27c853f2b] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-controller-manager-ha-957600" [3cc0e64f-a1d7-4062-b78a-b9de960cf935] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-controller-manager-ha-957600-m02" [fb9dba99-8e76-4c2f-b427-de3fee7d0300] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-controller-manager-ha-957600-m03" [54e543ef-8ef5-43e4-b669-71eba6c9b629] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-proxy-9qwpr" [424d5d60-76b3-47ce-bc8f-75f61fccdd9a] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-proxy-j29r7" [e87fe1ac-6577-44e3-af8f-c28e878fea08] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-proxy-z94m6" [cdd33d94-1a1c-4038-aeda-0c6e1d68e559] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-scheduler-ha-957600" [28ad5883-d593-42a7-952f-0038a7bb25d6] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-scheduler-ha-957600-m02" [d3a27ea9-a208-4278-8a50-332971e8a78c] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-scheduler-ha-957600-m03" [3288ad97-c220-44a6-bde1-a329e7dab060] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-vip-ha-957600" [2780187a-2cd6-43da-93bd-73c0dc959228] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-vip-ha-957600-m02" [0908b051-1096-41ae-b457-36b2162ae907] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "kube-vip-ha-957600-m03" [30ec686b-5763-4ed8-b4d7-a7eab172d0d8] Running
	I0612 13:46:17.511078    7444 system_pods.go:89] "storage-provisioner" [9a5d025e-c240-4084-a1bd-1db96161d3b3] Running
	I0612 13:46:17.511078    7444 system_pods.go:126] duration metric: took 221.6467ms to wait for k8s-apps to be running ...
	I0612 13:46:17.511078    7444 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 13:46:17.524737    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 13:46:17.553797    7444 system_svc.go:56] duration metric: took 42.7192ms WaitForService to wait for kubelet
	I0612 13:46:17.554662    7444 kubeadm.go:576] duration metric: took 18.8637783s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 13:46:17.554662    7444 node_conditions.go:102] verifying NodePressure condition ...
	I0612 13:46:17.690117    7444 request.go:629] Waited for 134.9103ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.203.104:8443/api/v1/nodes
	I0612 13:46:17.690117    7444 round_trippers.go:463] GET https://172.23.203.104:8443/api/v1/nodes
	I0612 13:46:17.690117    7444 round_trippers.go:469] Request Headers:
	I0612 13:46:17.690117    7444 round_trippers.go:473]     Accept: application/json, */*
	I0612 13:46:17.690117    7444 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 13:46:17.695714    7444 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 13:46:17.697422    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:46:17.697505    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:46:17.697505    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:46:17.697505    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:46:17.697505    7444 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 13:46:17.697505    7444 node_conditions.go:123] node cpu capacity is 2
	I0612 13:46:17.697505    7444 node_conditions.go:105] duration metric: took 142.8429ms to run NodePressure ...
	I0612 13:46:17.697505    7444 start.go:240] waiting for startup goroutines ...
	I0612 13:46:17.697593    7444 start.go:254] writing updated cluster config ...
	I0612 13:46:17.710529    7444 ssh_runner.go:195] Run: rm -f paused
	I0612 13:46:17.853956    7444 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 13:46:17.858608    7444 out.go:177] * Done! kubectl is now configured to use "ha-957600" cluster and "default" namespace by default
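
(The sections below are the node-level post-mortem that accompanies this run. As a hedged sketch, the same dump can be re-collected from a live profile with the standard minikube CLI; --file writes the bundle to disk instead of the console:

    out/minikube-windows-amd64.exe -p ha-957600 logs --file=ha-957600-postmortem.txt
)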
	
	
	==> Docker <==
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.273961261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.328931910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.329131410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.329406710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.329772810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 cri-dockerd[1222]: time="2024-06-12T20:38:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3989d97bb5bda163f1208a3d3ee259dc20986f91707fdaec72fbfd6f332c3a6a/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 20:38:29 ha-957600 cri-dockerd[1222]: time="2024-06-12T20:38:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/49b8332111b26e8a790f08afa32dac04688488303f4dcb0d529686fe5ef51560/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.897486456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.897933957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.898047758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.898630360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.929102167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.929248068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.929267668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:38:29 ha-957600 dockerd[1320]: time="2024-06-12T20:38:29.929372468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:46:56 ha-957600 dockerd[1320]: time="2024-06-12T20:46:56.559269747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:46:56 ha-957600 dockerd[1320]: time="2024-06-12T20:46:56.559437648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:46:56 ha-957600 dockerd[1320]: time="2024-06-12T20:46:56.559462348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:46:56 ha-957600 dockerd[1320]: time="2024-06-12T20:46:56.560499052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:46:56 ha-957600 cri-dockerd[1222]: time="2024-06-12T20:46:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2705e1162b2dfa56928107ee31e11cffe2a28d10a5ef252a20ac33fd3cd1e2c0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 12 20:46:58 ha-957600 cri-dockerd[1222]: time="2024-06-12T20:46:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 12 20:46:58 ha-957600 dockerd[1320]: time="2024-06-12T20:46:58.730677826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 20:46:58 ha-957600 dockerd[1320]: time="2024-06-12T20:46:58.731144528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 20:46:58 ha-957600 dockerd[1320]: time="2024-06-12T20:46:58.731241729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 20:46:58 ha-957600 dockerd[1320]: time="2024-06-12T20:46:58.731977932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	84e2387ee8a13       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   2705e1162b2df       busybox-fc5497c4f-q7zbt
	ec42c746f91c3       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   49b8332111b26       coredns-7db6d8ff4d-wv2wz
	c8abc35b31bc6       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   3989d97bb5bda       coredns-7db6d8ff4d-fvjdp
	f3fb45713a32c       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   8c96efd997764       storage-provisioner
	6d98838ddf5ec       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Running             kindnet-cni               0                   86884c3e05d62       kindnet-gdk8g
	acce2e5331821       747097150317f                                                                                         26 minutes ago      Running             kube-proxy                0                   935f2939503f5       kube-proxy-z94m6
	12d6ecaecdbef       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   1346e47f9a054       kube-vip-ha-957600
	89d14f1e8c68d       a52dc94f0a912                                                                                         27 minutes ago      Running             kube-scheduler            0                   1773fa0ee02c8       kube-scheduler-ha-957600
	cf6a5b6c15824       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   b2a101629276d       etcd-ha-957600
	488300684bb24       91be940803172                                                                                         27 minutes ago      Running             kube-apiserver            0                   91bc2d6c42e45       kube-apiserver-ha-957600
	f85741f3c269e       25a1387cdab82                                                                                         27 minutes ago      Running             kube-controller-manager   0                   38266d831298c       kube-controller-manager-ha-957600
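
The container table above is CRI-level state surfaced through cri-dockerd. A minimal sketch for inspecting the same state directly on the node, assuming the ha-957600 profile from this run (crictl ships in the minikube guest):

    out/minikube-windows-amd64.exe -p ha-957600 ssh -- sudo crictl ps -a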
	
	
	==> coredns [c8abc35b31bc] <==
	[INFO] 10.244.0.4:42986 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.127821489s
	[INFO] 10.244.0.4:33965 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166901s
	[INFO] 10.244.0.4:58509 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000362101s
	[INFO] 10.244.0.4:50272 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142601s
	[INFO] 10.244.0.4:33112 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182601s
	[INFO] 10.244.1.2:47306 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000419002s
	[INFO] 10.244.1.2:59985 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063s
	[INFO] 10.244.1.2:48089 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065801s
	[INFO] 10.244.1.2:42781 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004542721s
	[INFO] 10.244.1.2:60731 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000654s
	[INFO] 10.244.1.2:54446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001089s
	[INFO] 10.244.1.2:58167 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001481s
	[INFO] 10.244.2.2:52082 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107901s
	[INFO] 10.244.2.2:55279 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177001s
	[INFO] 10.244.2.2:57294 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070401s
	[INFO] 10.244.0.4:33423 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135201s
	[INFO] 10.244.0.4:41826 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001169s
	[INFO] 10.244.0.4:46427 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001253s
	[INFO] 10.244.0.4:38094 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251701s
	[INFO] 10.244.1.2:55510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123s
	[INFO] 10.244.1.2:37225 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000615s
	[INFO] 10.244.1.2:53395 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059201s
	[INFO] 10.244.2.2:35852 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148501s
	[INFO] 10.244.2.2:54338 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000582s
	[INFO] 10.244.2.2:41334 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000662s
	
	
	==> coredns [ec42c746f91c] <==
	[INFO] 10.244.2.2:54866 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000079801s
	[INFO] 10.244.2.2:41940 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.093995646s
	[INFO] 10.244.0.4:43707 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221101s
	[INFO] 10.244.0.4:43167 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204101s
	[INFO] 10.244.0.4:39327 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012035855s
	[INFO] 10.244.1.2:49888 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000197201s
	[INFO] 10.244.1.2:45990 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000215501s
	[INFO] 10.244.1.2:59692 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000338902s
	[INFO] 10.244.2.2:33811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207001s
	[INFO] 10.244.2.2:38677 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.038436877s
	[INFO] 10.244.2.2:48262 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000737s
	[INFO] 10.244.2.2:46710 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000183401s
	[INFO] 10.244.2.2:57557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000167801s
	[INFO] 10.244.2.2:43514 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152001s
	[INFO] 10.244.2.2:48911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000233301s
	[INFO] 10.244.2.2:35403 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000651s
	[INFO] 10.244.0.4:33256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196101s
	[INFO] 10.244.0.4:42388 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000254601s
	[INFO] 10.244.0.4:33200 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156101s
	[INFO] 10.244.0.4:57990 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000558s
	[INFO] 10.244.1.2:56220 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084501s
	[INFO] 10.244.1.2:37649 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001667s
	[INFO] 10.244.2.2:59667 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000825s
	[INFO] 10.244.1.2:40342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000300902s
	[INFO] 10.244.2.2:35837 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000256302s
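
Both coredns replicas answer cluster-internal names with NOERROR and return NXDOMAIN for the unqualified kubernetes.default, which is the expected search-path behavior. A hedged reproduction sketch, reusing the gcr.io/k8s-minikube/busybox image this suite already pulls:

    kubectl --context ha-957600 run --rm dns-check --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- nslookup kubernetes.default.svc.cluster.local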
	
	
	==> describe nodes <==
	Name:               ha-957600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=ha-957600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T13_38_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:38:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:05:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:02:24 +0000   Wed, 12 Jun 2024 20:38:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:02:24 +0000   Wed, 12 Jun 2024 20:38:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:02:24 +0000   Wed, 12 Jun 2024 20:38:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:02:24 +0000   Wed, 12 Jun 2024 20:38:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.203.104
	  Hostname:    ha-957600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e64d934c319b441bbd9685ce84fc68cf
	  System UUID:                fdad1bc4-ac9b-c541-b232-922aa0850b6e
	  Boot ID:                    1c97c559-dc70-4810-9425-0df71a26d678
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q7zbt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-fvjdp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-wv2wz             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-957600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-gdk8g                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-957600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-957600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-z94m6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-957600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-957600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-957600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-957600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-957600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m   node-controller  Node ha-957600 event: Registered Node ha-957600 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-957600 status is now: NodeReady
	  Normal  RegisteredNode           22m   node-controller  Node ha-957600 event: Registered Node ha-957600 in Controller
	  Normal  RegisteredNode           19m   node-controller  Node ha-957600 event: Registered Node ha-957600 in Controller
	
	
	Name:               ha-957600-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=ha-957600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T13_42_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:41:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:03:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 12 Jun 2024 21:02:26 +0000   Wed, 12 Jun 2024 21:04:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 12 Jun 2024 21:02:26 +0000   Wed, 12 Jun 2024 21:04:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 12 Jun 2024 21:02:26 +0000   Wed, 12 Jun 2024 21:04:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 12 Jun 2024 21:02:26 +0000   Wed, 12 Jun 2024 21:04:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.23.201.185
	  Hostname:    ha-957600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af67b7ed4f1d4a4cab4cfeaf81c8f5c6
	  System UUID:                4178c1bc-b702-0c4d-a862-c03e19bffe95
	  Boot ID:                    36912f38-1473-4e04-b9e9-e6a5f42c71db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qhrx6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-957600-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-54xjp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-957600-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-957600-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-j29r7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-957600-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-957600-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-957600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-957600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-957600-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m                node-controller  Node ha-957600-m02 event: Registered Node ha-957600-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-957600-m02 event: Registered Node ha-957600-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-957600-m02 event: Registered Node ha-957600-m02 in Controller
	  Normal  NodeNotReady             40s                node-controller  Node ha-957600-m02 status is now: NodeNotReady
	
	
	Name:               ha-957600-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=ha-957600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T13_45_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:45:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:05:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:02:44 +0000   Wed, 12 Jun 2024 20:45:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:02:44 +0000   Wed, 12 Jun 2024 20:45:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:02:44 +0000   Wed, 12 Jun 2024 20:45:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:02:44 +0000   Wed, 12 Jun 2024 20:46:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.207.166
	  Hostname:    ha-957600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0e098564aa445b2b5848cea090e8e15
	  System UUID:                09d9a757-7f66-5d4a-a594-4d8a5f785e73
	  Boot ID:                    0d48cb69-b4e8-4aab-974b-a031614083df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sfrgv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-957600-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-mwpsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-957600-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-957600-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-9qwpr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-957600-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-957600-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-957600-m03 event: Registered Node ha-957600-m03 in Controller
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-957600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-957600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-957600-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node ha-957600-m03 event: Registered Node ha-957600-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-957600-m03 event: Registered Node ha-957600-m03 in Controller
	
	
	Name:               ha-957600-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-957600-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=ha-957600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T13_51_16_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 20:51:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-957600-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:05:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:01:58 +0000   Wed, 12 Jun 2024 20:51:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:01:58 +0000   Wed, 12 Jun 2024 20:51:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:01:58 +0000   Wed, 12 Jun 2024 20:51:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:01:58 +0000   Wed, 12 Jun 2024 20:51:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.205.43
	  Hostname:    ha-957600-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5373bf07807440b48e603646b3ab8245
	  System UUID:                efcd18fb-09a6-444c-81c9-fc0b0f505a01
	  Boot ID:                    7e36d6bf-32d3-4485-99bb-feb4389706e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lnnq2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-sc9sn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  RegisteredNode           14m                node-controller  Node ha-957600-m04 event: Registered Node ha-957600-m04 in Controller
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-957600-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-957600-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-957600-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-957600-m04 event: Registered Node ha-957600-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-957600-m04 event: Registered Node ha-957600-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-957600-m04 status is now: NodeReady
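
Of the four nodes described above, only ha-957600-m02 carries node.kubernetes.io/unreachable taints, Unknown conditions, and a NodeNotReady event; its kubelet last renewed its lease at 21:03:56. A hedged follow-up sketch (the kubectl context is assumed to match the profile name minikube configured earlier):

    kubectl --context ha-957600 get nodes -o wide
    kubectl --context ha-957600 describe node ha-957600-m02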
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +48.322498] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.166864] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[Jun12 20:37] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.105818] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.581524] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.213203] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.242095] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +2.828884] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[  +0.193507] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.220915] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.260611] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[ +11.242976] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.108187] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.444115] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +6.936173] systemd-fstab-generator[1714]: Ignoring "noauto" option for root device
	[  +0.112377] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.878577] kauditd_printk_skb: 67 callbacks suppressed
	[Jun12 20:38] systemd-fstab-generator[2207]: Ignoring "noauto" option for root device
	[ +13.771844] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.634506] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.398361] kauditd_printk_skb: 19 callbacks suppressed
	[Jun12 20:40] hrtimer: interrupt took 2150712 ns
	[Jun12 20:42] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [cf6a5b6c1582] <==
	{"level":"warn","ts":"2024-06-12T21:05:17.095669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.098452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.106999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.12565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.126044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.135023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.143393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.149786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.154537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.165623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.172992Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.182298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.190345Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.194901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.205241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.221395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.226667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.23295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.238676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.244727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.256419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.268418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.273931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.27854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-12T21:05:17.326887Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f62ce7ee06bc2052","from":"f62ce7ee06bc2052","remote-peer-id":"609cb8ee838d4599","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:05:17 up 29 min,  0 users,  load average: 0.42, 0.54, 0.46
	Linux ha-957600 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6d98838ddf5e] <==
	I0612 21:04:37.749263       1 main.go:250] Node ha-957600-m04 has CIDR [10.244.3.0/24] 
	I0612 21:04:47.758471       1 main.go:223] Handling node with IPs: map[172.23.203.104:{}]
	I0612 21:04:47.758818       1 main.go:227] handling current node
	I0612 21:04:47.758870       1 main.go:223] Handling node with IPs: map[172.23.201.185:{}]
	I0612 21:04:47.758880       1 main.go:250] Node ha-957600-m02 has CIDR [10.244.1.0/24] 
	I0612 21:04:47.759227       1 main.go:223] Handling node with IPs: map[172.23.207.166:{}]
	I0612 21:04:47.759444       1 main.go:250] Node ha-957600-m03 has CIDR [10.244.2.0/24] 
	I0612 21:04:47.759877       1 main.go:223] Handling node with IPs: map[172.23.205.43:{}]
	I0612 21:04:47.759908       1 main.go:250] Node ha-957600-m04 has CIDR [10.244.3.0/24] 
	I0612 21:04:57.773900       1 main.go:223] Handling node with IPs: map[172.23.203.104:{}]
	I0612 21:04:57.773948       1 main.go:227] handling current node
	I0612 21:04:57.773963       1 main.go:223] Handling node with IPs: map[172.23.201.185:{}]
	I0612 21:04:57.773970       1 main.go:250] Node ha-957600-m02 has CIDR [10.244.1.0/24] 
	I0612 21:04:57.774415       1 main.go:223] Handling node with IPs: map[172.23.207.166:{}]
	I0612 21:04:57.774510       1 main.go:250] Node ha-957600-m03 has CIDR [10.244.2.0/24] 
	I0612 21:04:57.775616       1 main.go:223] Handling node with IPs: map[172.23.205.43:{}]
	I0612 21:04:57.775723       1 main.go:250] Node ha-957600-m04 has CIDR [10.244.3.0/24] 
	I0612 21:05:07.784149       1 main.go:223] Handling node with IPs: map[172.23.203.104:{}]
	I0612 21:05:07.784349       1 main.go:227] handling current node
	I0612 21:05:07.784492       1 main.go:223] Handling node with IPs: map[172.23.201.185:{}]
	I0612 21:05:07.784810       1 main.go:250] Node ha-957600-m02 has CIDR [10.244.1.0/24] 
	I0612 21:05:07.785510       1 main.go:223] Handling node with IPs: map[172.23.207.166:{}]
	I0612 21:05:07.785701       1 main.go:250] Node ha-957600-m03 has CIDR [10.244.2.0/24] 
	I0612 21:05:07.785843       1 main.go:223] Handling node with IPs: map[172.23.205.43:{}]
	I0612 21:05:07.785997       1 main.go:250] Node ha-957600-m04 has CIDR [10.244.3.0/24] 
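
kindnet on this node is still reconciling routes for all four pod CIDRs every ~10s, including 10.244.1.0/24 for the NotReady ha-957600-m02, so east-west pod routes have not been withdrawn. A minimal sketch for confirming the installed routes (run against the same profile):

    out/minikube-windows-amd64.exe -p ha-957600 ssh -- ip route show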
	
	
	==> kube-apiserver [488300684bb2] <==
	I0612 20:38:04.471406       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 20:38:04.535397       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0612 20:38:04.563217       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 20:38:17.171073       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0612 20:38:17.279149       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0612 20:45:51.827352       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 12.4µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0612 20:45:51.833936       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0612 20:45:51.833976       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0612 20:45:51.840471       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0612 20:45:51.841089       1 timeout.go:142] post-timeout activity - time-elapsed: 22.609883ms, PATCH "/api/v1/namespaces/default/events/ha-957600-m03.17d85cabd2a3635e" result: <nil>
	E0612 20:47:02.410874       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59693: use of closed network connection
	E0612 20:47:02.927088       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59695: use of closed network connection
	E0612 20:47:04.507868       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59697: use of closed network connection
	E0612 20:47:05.139717       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59699: use of closed network connection
	E0612 20:47:05.594031       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59701: use of closed network connection
	E0612 20:47:06.083903       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59703: use of closed network connection
	E0612 20:47:06.549253       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59705: use of closed network connection
	E0612 20:47:06.995254       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59707: use of closed network connection
	E0612 20:47:07.435015       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59709: use of closed network connection
	E0612 20:47:08.214177       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59712: use of closed network connection
	E0612 20:47:18.674272       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59714: use of closed network connection
	E0612 20:47:19.101737       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59717: use of closed network connection
	E0612 20:47:29.519735       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59719: use of closed network connection
	E0612 20:47:29.946662       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59723: use of closed network connection
	E0612 20:47:40.389190       1 conn.go:339] Error on socket receive: read tcp 172.23.207.254:8443->172.23.192.1:59725: use of closed network connection
	
	
	==> kube-controller-manager [f85741f3c269] <==
	I0612 20:45:51.046695       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-957600-m03" podCIDRs=["10.244.2.0/24"]
	I0612 20:45:51.814004       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-957600-m03"
	I0612 20:46:55.513166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="181.083148ms"
	I0612 20:46:55.560408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.081868ms"
	I0612 20:46:55.942181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="381.551064ms"
	I0612 20:46:56.240327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="296.490861ms"
	I0612 20:46:56.286745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.059065ms"
	I0612 20:46:56.392521       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.631078ms"
	I0612 20:46:56.393057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="279.201µs"
	I0612 20:46:56.469856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.53718ms"
	I0612 20:46:56.469962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.9µs"
	I0612 20:46:58.899855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.744337ms"
	I0612 20:46:58.956982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.902564ms"
	I0612 20:46:58.959480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="991.404µs"
	I0612 20:46:59.008748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.439385ms"
	I0612 20:46:59.009419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="301.501µs"
	I0612 20:47:00.009698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.315631ms"
	I0612 20:47:00.010419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="580.303µs"
	I0612 20:51:16.068832       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-957600-m04\" does not exist"
	I0612 20:51:16.155019       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-957600-m04" podCIDRs=["10.244.3.0/24"]
	I0612 20:51:16.913988       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-957600-m04"
	I0612 20:51:39.545526       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-957600-m04"
	I0612 21:04:37.119637       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-957600-m04"
	I0612 21:04:37.221849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.117947ms"
	I0612 21:04:37.222015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.9µs"
	
	
	==> kube-proxy [acce2e533182] <==
	I0612 20:38:18.299043       1 server_linux.go:69] "Using iptables proxy"
	I0612 20:38:18.312357       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.203.104"]
	I0612 20:38:18.380210       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 20:38:18.380639       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 20:38:18.380772       1 server_linux.go:165] "Using iptables Proxier"
	I0612 20:38:18.386825       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 20:38:18.387128       1 server.go:872] "Version info" version="v1.30.1"
	I0612 20:38:18.387274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 20:38:18.390104       1 config.go:192] "Starting service config controller"
	I0612 20:38:18.390243       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 20:38:18.390298       1 config.go:101] "Starting endpoint slice config controller"
	I0612 20:38:18.390305       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 20:38:18.390855       1 config.go:319] "Starting node config controller"
	I0612 20:38:18.390952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 20:38:18.491013       1 shared_informer.go:320] Caches are synced for node config
	I0612 20:38:18.491149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 20:38:18.491178       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [89d14f1e8c68] <==
	W0612 20:38:01.664260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 20:38:01.664982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 20:38:01.664535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0612 20:38:01.665801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 20:38:03.362833       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0612 20:46:55.423799       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qhrx6\": pod busybox-fc5497c4f-qhrx6 is already assigned to node \"ha-957600-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qhrx6" node="ha-957600-m02"
	E0612 20:46:55.423985       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 438e6a8d-a078-4633-a2e8-5a41e507ad81(default/busybox-fc5497c4f-qhrx6) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-qhrx6"
	E0612 20:46:55.424805       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qhrx6\": pod busybox-fc5497c4f-qhrx6 is already assigned to node \"ha-957600-m02\"" pod="default/busybox-fc5497c4f-qhrx6"
	I0612 20:46:55.424983       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qhrx6" node="ha-957600-m02"
	E0612 20:46:55.459641       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sfrgv\": pod busybox-fc5497c4f-sfrgv is already assigned to node \"ha-957600-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-sfrgv" node="ha-957600-m03"
	E0612 20:46:55.462990       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 65f290c0-f814-4897-98f8-5d944ca8ad36(default/busybox-fc5497c4f-sfrgv) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-sfrgv"
	E0612 20:46:55.463122       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sfrgv\": pod busybox-fc5497c4f-sfrgv is already assigned to node \"ha-957600-m03\"" pod="default/busybox-fc5497c4f-sfrgv"
	I0612 20:46:55.463161       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-sfrgv" node="ha-957600-m03"
	E0612 20:46:55.481780       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-q7zbt\": pod busybox-fc5497c4f-q7zbt is already assigned to node \"ha-957600\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-q7zbt" node="ha-957600"
	E0612 20:46:55.481825       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 76d70f3c-f134-446f-8649-2f89690c9ae0(default/busybox-fc5497c4f-q7zbt) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-q7zbt"
	E0612 20:46:55.481842       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-q7zbt\": pod busybox-fc5497c4f-q7zbt is already assigned to node \"ha-957600\"" pod="default/busybox-fc5497c4f-q7zbt"
	I0612 20:46:55.481859       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-q7zbt" node="ha-957600"
	E0612 20:51:16.230385       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sc9sn\": pod kube-proxy-sc9sn is already assigned to node \"ha-957600-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sc9sn" node="ha-957600-m04"
	E0612 20:51:16.231088       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 02429bc4-a3a5-46dd-94c6-e1af0a3e0e26(kube-system/kube-proxy-sc9sn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sc9sn"
	E0612 20:51:16.231206       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sc9sn\": pod kube-proxy-sc9sn is already assigned to node \"ha-957600-m04\"" pod="kube-system/kube-proxy-sc9sn"
	I0612 20:51:16.231369       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sc9sn" node="ha-957600-m04"
	E0612 20:51:16.230812       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lnnq2\": pod kindnet-lnnq2 is already assigned to node \"ha-957600-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lnnq2" node="ha-957600-m04"
	E0612 20:51:16.241827       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7683e196-3829-4986-8da3-16478947dcac(kube-system/kindnet-lnnq2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lnnq2"
	E0612 20:51:16.242135       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lnnq2\": pod kindnet-lnnq2 is already assigned to node \"ha-957600-m04\"" pod="kube-system/kindnet-lnnq2"
	I0612 20:51:16.242261       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lnnq2" node="ha-957600-m04"
	
	
	==> kubelet <==
	Jun 12 21:01:04 ha-957600 kubelet[2214]: E0612 21:01:04.615114    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:01:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:01:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:01:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:01:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:02:04 ha-957600 kubelet[2214]: E0612 21:02:04.614841    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:02:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:02:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:02:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:02:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:03:04 ha-957600 kubelet[2214]: E0612 21:03:04.620514    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:03:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:03:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:03:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:03:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:04:04 ha-957600 kubelet[2214]: E0612 21:04:04.615445    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:04:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:04:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:04:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:04:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:05:04 ha-957600 kubelet[2214]: E0612 21:05:04.614803    2214 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:05:04 ha-957600 kubelet[2214]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:05:04 ha-957600 kubelet[2214]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:05:04 ha-957600 kubelet[2214]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:05:04 ha-957600 kubelet[2214]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0612 14:05:08.989839    2228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-957600 -n ha-957600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-957600 -n ha-957600: (12.4480566s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-957600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (104.72s)

TestMountStart/serial/RestartStopped (183.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-443500
E0612 14:32:55.151831    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 14:34:51.915474    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-443500: exit status 90 (2m52.4652879s)

-- stdout --
	* [mount-start-2-443500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-443500
	* Restarting existing hyperv VM for "mount-start-2-443500" ...
	
	

-- /stdout --
** stderr ** 
	W0612 14:32:24.110057    6788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 12 21:33:49 mount-start-2-443500 systemd[1]: Starting Docker Application Container Engine...
	Jun 12 21:33:49 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:49.109455194Z" level=info msg="Starting up"
	Jun 12 21:33:49 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:49.110417706Z" level=info msg="containerd not running, starting managed containerd"
	Jun 12 21:33:49 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:49.116055380Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.150795433Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.178991000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.179098602Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.179212303Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.179444506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.180100215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.180156216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.180437519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.180553121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.180580721Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.180594821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.181317531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.182191142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.185339983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.185465185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.185682688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.185792489Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.186396697Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.186526799Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.186545699Z" level=info msg="metadata content store policy set" policy=shared
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.188613926Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.188722927Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.188749128Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.188767528Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.188786628Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.188883329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.189539338Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.189799941Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.189908543Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.189931243Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.189950043Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.189968544Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.189984244Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190017944Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190036344Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190057245Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190072945Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190090145Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190174746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190351449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190750054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190882955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190905256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190921156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190935356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190959256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190978957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.190996957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191011257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191026357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191044258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191063058Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191089258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191122359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191135459Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191316961Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191346862Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191360362Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191375862Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191388462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191402062Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191422363Z" level=info msg="NRI interface is disabled by configuration."
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.191804768Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.192156572Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.192208973Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 12 21:33:49 mount-start-2-443500 dockerd[660]: time="2024-06-12T21:33:49.192305974Z" level=info msg="containerd successfully booted in 0.043852s"
	Jun 12 21:33:50 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:50.165205320Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 12 21:33:50 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:50.190713732Z" level=info msg="Loading containers: start."
	Jun 12 21:33:50 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:50.439992617Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 12 21:33:50 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:50.520806039Z" level=info msg="Loading containers: done."
	Jun 12 21:33:50 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:50.550477988Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 12 21:33:50 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:50.551557204Z" level=info msg="Daemon has completed initialization"
	Jun 12 21:33:50 mount-start-2-443500 systemd[1]: Started Docker Application Container Engine.
	Jun 12 21:33:50 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:50.614388554Z" level=info msg="API listen on [::]:2376"
	Jun 12 21:33:50 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:33:50.614475655Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 12 21:34:15 mount-start-2-443500 systemd[1]: Stopping Docker Application Container Engine...
	Jun 12 21:34:15 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:34:15.297800588Z" level=info msg="Processing signal 'terminated'"
	Jun 12 21:34:15 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:34:15.300077294Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 12 21:34:15 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:34:15.300787096Z" level=info msg="Daemon shutdown complete"
	Jun 12 21:34:15 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:34:15.300873597Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 12 21:34:15 mount-start-2-443500 dockerd[654]: time="2024-06-12T21:34:15.300883797Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 12 21:34:16 mount-start-2-443500 systemd[1]: docker.service: Deactivated successfully.
	Jun 12 21:34:16 mount-start-2-443500 systemd[1]: Stopped Docker Application Container Engine.
	Jun 12 21:34:16 mount-start-2-443500 systemd[1]: Starting Docker Application Container Engine...
	Jun 12 21:34:16 mount-start-2-443500 dockerd[1027]: time="2024-06-12T21:34:16.373764705Z" level=info msg="Starting up"
	Jun 12 21:35:16 mount-start-2-443500 dockerd[1027]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 12 21:35:16 mount-start-2-443500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 12 21:35:16 mount-start-2-443500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 12 21:35:16 mount-start-2-443500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-windows-amd64.exe start -p mount-start-2-443500" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-443500 -n mount-start-2-443500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-443500 -n mount-start-2-443500: exit status 6 (11.4026083s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0612 14:35:16.594788    4040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0612 14:35:27.822725    4040 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-443500" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-443500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (183.88s)

TestMultiNode/serial/PingHostFrom2Pods (55.26s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-45qqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-45qqd -- sh -c "ping -c 1 172.23.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-45qqd -- sh -c "ping -c 1 172.23.192.1": exit status 1 (10.3944371s)

-- stdout --
	PING 172.23.192.1 (172.23.192.1): 56 data bytes
	
	--- 172.23.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0612 14:43:32.898851    1052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.23.192.1) from pod (busybox-fc5497c4f-45qqd): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-9bsls -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-9bsls -- sh -c "ping -c 1 172.23.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-9bsls -- sh -c "ping -c 1 172.23.192.1": exit status 1 (10.3839733s)

-- stdout --
	PING 172.23.192.1 (172.23.192.1): 56 data bytes
	
	--- 172.23.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0612 14:43:43.721946    8652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.23.192.1) from pod (busybox-fc5497c4f-9bsls): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-025000 -n multinode-025000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-025000 -n multinode-025000: (11.6383988s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 logs -n 25: (8.1925822s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-443500                           | mount-start-2-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:28 PDT | 12 Jun 24 14:31 PDT |
	|         | --memory=2048 --mount                             |                      |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:31 PDT |                     |
	|         | --profile mount-start-2-443500 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         | 0                                                 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-443500 ssh -- ls                    | mount-start-2-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:31 PDT | 12 Jun 24 14:31 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-443500                           | mount-start-1-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:31 PDT | 12 Jun 24 14:31 PDT |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-443500 ssh -- ls                    | mount-start-2-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:31 PDT | 12 Jun 24 14:31 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-443500                           | mount-start-2-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:31 PDT | 12 Jun 24 14:32 PDT |
	| start   | -p mount-start-2-443500                           | mount-start-2-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:32 PDT |                     |
	| delete  | -p mount-start-2-443500                           | mount-start-2-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:35 PDT | 12 Jun 24 14:36 PDT |
	| delete  | -p mount-start-1-443500                           | mount-start-1-443500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:36 PDT | 12 Jun 24 14:36 PDT |
	| start   | -p multinode-025000                               | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:36 PDT | 12 Jun 24 14:43 PDT |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- apply -f                   | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- rollout                    | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- get pods -o                | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- get pods -o                | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | busybox-fc5497c4f-45qqd --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | busybox-fc5497c4f-9bsls --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | busybox-fc5497c4f-45qqd --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | busybox-fc5497c4f-9bsls --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | busybox-fc5497c4f-45qqd -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | busybox-fc5497c4f-9bsls -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- get pods -o                | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | busybox-fc5497c4f-45qqd                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT |                     |
	|         | busybox-fc5497c4f-45qqd -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.192.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT | 12 Jun 24 14:43 PDT |
	|         | busybox-fc5497c4f-9bsls                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-025000 -- exec                       | multinode-025000     | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:43 PDT |                     |
	|         | busybox-fc5497c4f-9bsls -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.23.192.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
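
	The nslookup rows above pipe busybox output through "awk 'NR==5' | cut -d' ' -f3" to pull out the resolved address: line 5, third space-separated field. A minimal Go sketch of that extraction (illustrative only, not the test's code; the sample output below is invented, not taken from this run):

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of
	// the busybox nslookup output and return its third space-separated field.
	func hostIPFromNslookup(out string) (string, error) {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return "", fmt.Errorf("unexpected nslookup output: %q", out)
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return "", fmt.Errorf("unexpected line 5: %q", lines[4])
		}
		return fields[2], nil
	}

	func main() {
		// invented sample in busybox nslookup's shape
		out := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 172.23.192.1 host.minikube.internal\n"
		ip, err := hostIPFromNslookup(out)
		fmt.Println(ip, err)
	}
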
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 14:36:31
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
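
	Each entry below follows the klog-style header just described. As a reading aid, a minimal Go sketch that splits such a line into its fields (the regexp and names are illustrative, not minikube code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches headers like "I0612 14:36:31.345035    6676 out.go:291] msg",
	// per the format string above: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		m := klogLine.FindStringSubmatch("I0612 14:36:31.345035    6676 out.go:291] Setting OutFile to fd 1280 ...")
		if m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}
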
	I0612 14:36:31.345035    6676 out.go:291] Setting OutFile to fd 1280 ...
	I0612 14:36:31.345612    6676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 14:36:31.345612    6676 out.go:304] Setting ErrFile to fd 640...
	I0612 14:36:31.345612    6676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 14:36:31.369684    6676 out.go:298] Setting JSON to false
	I0612 14:36:31.372617    6676 start.go:129] hostinfo: {"hostname":"minikube1","uptime":26544,"bootTime":1718201647,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 14:36:31.372617    6676 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 14:36:31.378693    6676 out.go:177] * [multinode-025000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 14:36:31.383114    6676 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 14:36:31.382958    6676 notify.go:220] Checking for updates...
	I0612 14:36:31.385920    6676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 14:36:31.388304    6676 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 14:36:31.390659    6676 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 14:36:31.392753    6676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 14:36:31.394519    6676 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:36:31.397401    6676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 14:36:36.614853    6676 out.go:177] * Using the hyperv driver based on user configuration
	I0612 14:36:36.618103    6676 start.go:297] selected driver: hyperv
	I0612 14:36:36.618103    6676 start.go:901] validating driver "hyperv" against <nil>
	I0612 14:36:36.618218    6676 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 14:36:36.664423    6676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 14:36:36.665764    6676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 14:36:36.665764    6676 cni.go:84] Creating CNI manager for ""
	I0612 14:36:36.665764    6676 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0612 14:36:36.665764    6676 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
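
	The cni.go lines above record why kindnet gets picked: a multinode request with no explicit CNI choice. An illustrative reconstruction of that decision in Go (function and parameter names are assumptions, not minikube's actual cni.go):

	package main

	import "fmt"

	// recommendCNI sketches the logged decision: an explicit --cni wins;
	// otherwise a multinode cluster gets kindnet, and a single node needs none.
	func recommendCNI(multinodeRequested bool, existingNodes int, explicit string) string {
		if explicit != "" {
			return explicit
		}
		if multinodeRequested || existingNodes > 1 {
			return "kindnet" // cross-node pod traffic needs a real CNI
		}
		return ""
	}

	func main() {
		fmt.Println(recommendCNI(true, 0, "")) // "kindnet", matching the log
	}
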
	I0612 14:36:36.665764    6676 start.go:340] cluster config:
	{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 14:36:36.665764    6676 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 14:36:36.668724    6676 out.go:177] * Starting "multinode-025000" primary control-plane node in "multinode-025000" cluster
	I0612 14:36:36.673051    6676 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 14:36:36.673192    6676 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0612 14:36:36.673192    6676 cache.go:56] Caching tarball of preloaded images
	I0612 14:36:36.673192    6676 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 14:36:36.673737    6676 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 14:36:36.673988    6676 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 14:36:36.674130    6676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json: {Name:mk2da5654a2125f50878a8669b423863653d6d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:36:36.675726    6676 start.go:360] acquireMachinesLock for multinode-025000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 14:36:36.675944    6676 start.go:364] duration metric: took 111.7µs to acquireMachinesLock for "multinode-025000"
	I0612 14:36:36.676070    6676 start.go:93] Provisioning new machine with config: &{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 14:36:36.676070    6676 start.go:125] createHost starting for "" (driver="hyperv")
	I0612 14:36:36.682995    6676 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 14:36:36.682995    6676 start.go:159] libmachine.API.Create for "multinode-025000" (driver="hyperv")
	I0612 14:36:36.682995    6676 client.go:168] LocalClient.Create starting
	I0612 14:36:36.682995    6676 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 14:36:36.682995    6676 main.go:141] libmachine: Decoding PEM data...
	I0612 14:36:36.682995    6676 main.go:141] libmachine: Parsing certificate...
	I0612 14:36:36.684212    6676 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 14:36:36.684212    6676 main.go:141] libmachine: Decoding PEM data...
	I0612 14:36:36.684212    6676 main.go:141] libmachine: Parsing certificate...
	I0612 14:36:36.684212    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 14:36:38.642943    6676 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 14:36:38.642943    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:38.642943    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 14:36:40.289576    6676 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 14:36:40.289656    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:40.289730    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 14:36:41.723014    6676 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 14:36:41.723014    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:41.727210    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 14:36:45.222439    6676 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 14:36:45.222439    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:45.237039    6676 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 14:36:45.710575    6676 main.go:141] libmachine: Creating SSH key...
	I0612 14:36:46.032509    6676 main.go:141] libmachine: Creating VM...
	I0612 14:36:46.032509    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 14:36:48.738237    6676 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 14:36:48.745144    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:48.745268    6676 main.go:141] libmachine: Using switch "Default Switch"
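
	Every "[executing ==>]" entry above is powershell.exe invoked with -NoProfile -NonInteractive and a single command string, with stdout and stderr captured separately. A minimal Go sketch of such a wrapper (psRun is an illustrative name, not the driver's actual API):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// psRun shells out to PowerShell the way the log shows: non-interactive,
	// no profile, one command string, stdout and stderr captured separately.
	func psRun(args ...string) (string, string, error) {
		cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			append([]string{"-NoProfile", "-NonInteractive"}, args...)...)
		var out, errb bytes.Buffer
		cmd.Stdout, cmd.Stderr = &out, &errb
		err := cmd.Run()
		return out.String(), errb.String(), err
	}

	func main() {
		stdout, stderr, err := psRun(`( Hyper-V\Get-VM multinode-025000 ).state`)
		fmt.Printf("stdout=%q stderr=%q err=%v\n", stdout, stderr, err)
	}
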
	I0612 14:36:48.745268    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 14:36:50.390620    6676 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 14:36:50.390620    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:50.390620    6676 main.go:141] libmachine: Creating VHD
	I0612 14:36:50.398999    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 14:36:54.026775    6676 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B6479A2A-FCB1-46CF-9DD4-67007A06EC8D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 14:36:54.032910    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:54.032910    6676 main.go:141] libmachine: Writing magic tar header
	I0612 14:36:54.032910    6676 main.go:141] libmachine: Writing SSH key tar header
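
	The "Writing magic tar header" / "Writing SSH key tar header" steps appear to follow the boot2docker convention used by docker-machine-style Hyper-V drivers: the small fixed VHD is filled with a tar stream whose first entry is a magic marker, followed by the generated public key, and the guest unpacks it on first boot. A hedged sketch under that assumption (the marker string and file layout are assumptions, not read from this log):

	package main

	import (
		"archive/tar"
		"log"
		"os"
	)

	// Assumed marker: the guest scans the raw disk for this on first boot.
	const magic = "boot2docker, please format-me"

	func writeKeyTar(diskPath, pubKey string) error {
		f, err := os.OpenFile(diskPath, os.O_WRONLY|os.O_CREATE, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f)
		defer tw.Close()
		// magic header first, so the guest recognizes the unformatted disk
		if err := tw.WriteHeader(&tar.Header{Name: magic, Mode: 0644, Size: int64(len(magic))}); err != nil {
			return err
		}
		if _, err := tw.Write([]byte(magic)); err != nil {
			return err
		}
		// then the public key the provisioner will later log in with
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}); err != nil {
			return err
		}
		_, err = tw.Write([]byte(pubKey))
		return err
	}

	func main() {
		if err := writeKeyTar("fixed.vhd", "ssh-rsa AAAA... (example key)"); err != nil {
			log.Fatal(err)
		}
	}
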
	I0612 14:36:54.042141    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 14:36:57.196543    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:36:57.208202    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:57.208259    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\disk.vhd' -SizeBytes 20000MB
	I0612 14:36:59.651695    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:36:59.651786    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:36:59.651786    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-025000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0612 14:37:03.119819    6676 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-025000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 14:37:03.119819    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:03.131231    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-025000 -DynamicMemoryEnabled $false
	I0612 14:37:05.280002    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:05.280002    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:05.280002    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-025000 -Count 2
	I0612 14:37:07.349603    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:07.359971    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:07.359971    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-025000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\boot2docker.iso'
	I0612 14:37:09.852097    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:09.852097    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:09.852097    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-025000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\disk.vhd'
	I0612 14:37:12.413921    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:12.413921    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:12.413921    6676 main.go:141] libmachine: Starting VM...
	I0612 14:37:12.413921    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-025000
	I0612 14:37:15.441764    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:15.441764    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:15.441764    6676 main.go:141] libmachine: Waiting for host to start...
	I0612 14:37:15.441764    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:17.646868    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:17.646868    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:17.646868    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:37:20.144084    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:20.144084    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:21.151151    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:23.316696    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:23.316696    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:23.316696    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:37:25.806481    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:25.806823    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:26.819862    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:29.008793    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:29.018548    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:29.018619    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:37:31.463591    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:31.463842    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:32.469046    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:34.679295    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:34.679295    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:34.684770    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:37:37.159272    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:37:37.159272    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:38.161969    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:40.303447    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:40.311350    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:40.311350    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:37:42.810382    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:37:42.821277    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:42.821277    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:44.817072    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:44.817072    6676 main.go:141] libmachine: [stderr =====>] : 
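
	The block above is a plain poll loop: query VM state and the first adapter's address roughly once a second until an IP appears. A minimal Go sketch of the same wait (getIP stands in for the PowerShell query shown in the log; names are illustrative):

	package main

	import (
		"fmt"
		"strings"
		"time"
	)

	// getIP stands in for: (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
	var getIP = func(vm string) string { return "" } // wired to a PowerShell runner in real use

	// waitForIP mirrors the loop above: poll until the first adapter reports
	// an address, sleeping ~1s between attempts, with an overall deadline.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if ip := strings.TrimSpace(getIP(vm)); ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
	}

	func main() {
		ip, err := waitForIP("multinode-025000", 2*time.Second)
		fmt.Println(ip, err)
	}
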
	I0612 14:37:44.828246    6676 machine.go:94] provisionDockerMachine start ...
	I0612 14:37:44.828246    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:46.884142    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:46.884142    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:46.884142    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:37:49.236470    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:37:49.246896    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:49.252596    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:37:49.260368    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.198.154 22 <nil> <nil>}
	I0612 14:37:49.260368    6676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 14:37:49.398746    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 14:37:49.398746    6676 buildroot.go:166] provisioning hostname "multinode-025000"
	I0612 14:37:49.399273    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:51.411010    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:51.422309    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:51.422309    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:37:53.815710    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:37:53.825897    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:53.831853    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:37:53.832218    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.198.154 22 <nil> <nil>}
	I0612 14:37:53.832218    6676 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025000 && echo "multinode-025000" | sudo tee /etc/hostname
	I0612 14:37:53.989860    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025000
	
	I0612 14:37:53.989964    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:37:56.016964    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:37:56.028247    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:56.028388    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:37:58.463608    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:37:58.474000    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:37:58.480146    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:37:58.480644    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.198.154 22 <nil> <nil>}
	I0612 14:37:58.480644    6676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 14:37:58.631035    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 14:37:58.631035    6676 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 14:37:58.631035    6676 buildroot.go:174] setting up certificates
	I0612 14:37:58.631035    6676 provision.go:84] configureAuth start
	I0612 14:37:58.631581    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:00.654935    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:00.654935    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:00.665990    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:03.022157    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:03.032501    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:03.032501    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:05.013890    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:05.013890    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:05.013890    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:07.417741    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:07.428220    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:07.428220    6676 provision.go:143] copyHostCerts
	I0612 14:38:07.428220    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 14:38:07.428220    6676 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 14:38:07.428220    6676 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 14:38:07.429287    6676 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 14:38:07.430617    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 14:38:07.431021    6676 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 14:38:07.431021    6676 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 14:38:07.431390    6676 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 14:38:07.432635    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 14:38:07.432995    6676 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 14:38:07.432995    6676 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 14:38:07.433077    6676 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 14:38:07.434267    6676 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-025000 san=[127.0.0.1 172.23.198.154 localhost minikube multinode-025000]
	I0612 14:38:07.745959    6676 provision.go:177] copyRemoteCerts
	I0612 14:38:07.764251    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 14:38:07.764869    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:09.764222    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:09.774450    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:09.774450    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:12.200710    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:12.210637    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:12.210637    6676 sshutil.go:53] new ssh client: &{IP:172.23.198.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 14:38:12.314827    6676 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5505607s)
	I0612 14:38:12.314827    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 14:38:12.315669    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 14:38:12.366686    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 14:38:12.366686    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0612 14:38:12.414277    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 14:38:12.415079    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0612 14:38:12.466156    6676 provision.go:87] duration metric: took 13.8350078s to configureAuth
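
	The configureAuth step above generates a Docker server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine name. A self-signed Go sketch of that shape (minikube actually signs with ca.pem/ca-key.pem; this is illustrative only):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-025000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list logged above
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.23.198.154")},
			DNSNames:    []string{"localhost", "minikube", "multinode-025000"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
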
	I0612 14:38:12.466223    6676 buildroot.go:189] setting minikube options for container-runtime
	I0612 14:38:12.466966    6676 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:38:12.467028    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:14.551783    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:14.562279    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:14.562279    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:16.980945    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:16.980945    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:16.998093    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:38:16.998265    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.198.154 22 <nil> <nil>}
	I0612 14:38:16.998265    6676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 14:38:17.140446    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 14:38:17.140556    6676 buildroot.go:70] root file system type: tmpfs
	I0612 14:38:17.140767    6676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 14:38:17.140767    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:19.203363    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:19.203363    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:19.203363    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:21.631448    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:21.631568    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:21.638880    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:38:21.639560    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.198.154 22 <nil> <nil>}
	I0612 14:38:21.640084    6676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 14:38:21.802191    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 14:38:21.802348    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:23.839647    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:23.850672    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:23.850834    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:26.278033    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:26.289221    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:26.295301    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:38:26.295389    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.198.154 22 <nil> <nil>}
	I0612 14:38:26.295389    6676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 14:38:28.401506    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0612 14:38:28.401506    6676 machine.go:97] duration metric: took 43.5731159s to provisionDockerMachine
	I0612 14:38:28.401506    6676 client.go:171] duration metric: took 1m51.7181417s to LocalClient.Create
	I0612 14:38:28.401506    6676 start.go:167] duration metric: took 1m51.7181417s to libmachine.API.Create "multinode-025000"
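
	The diff-or-replace one-liner a few lines up is an idempotent update: only if the staged docker.service.new differs from the live unit is it moved into place and the daemon reloaded and restarted. A local Go sketch of the compare-then-swap part (paths as in the log; the systemctl side effects are left out):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// updateIfChanged compares the staged file with the live one and swaps it
	// in only when they differ; callers would then daemon-reload and restart.
	func updateIfChanged(live, staged string) (changed bool, err error) {
		cur, err := os.ReadFile(live)
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		next, err := os.ReadFile(staged)
		if err != nil {
			return false, err
		}
		if bytes.Equal(cur, next) {
			return false, nil // nothing to do; service keeps running untouched
		}
		return true, os.Rename(staged, live)
	}

	func main() {
		changed, err := updateIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new")
		fmt.Println("changed:", changed, "err:", err)
	}
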
	I0612 14:38:28.401506    6676 start.go:293] postStartSetup for "multinode-025000" (driver="hyperv")
	I0612 14:38:28.401506    6676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 14:38:28.414741    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 14:38:28.414741    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:30.458154    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:30.468912    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:30.468912    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:32.881625    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:32.881625    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:32.881625    6676 sshutil.go:53] new ssh client: &{IP:172.23.198.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 14:38:32.992699    6676 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5779435s)
	I0612 14:38:33.012934    6676 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 14:38:33.015676    6676 command_runner.go:130] > NAME=Buildroot
	I0612 14:38:33.015676    6676 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 14:38:33.015676    6676 command_runner.go:130] > ID=buildroot
	I0612 14:38:33.015676    6676 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 14:38:33.015676    6676 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 14:38:33.015676    6676 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 14:38:33.015676    6676 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 14:38:33.021208    6676 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 14:38:33.021670    6676 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 14:38:33.021670    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 14:38:33.033825    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 14:38:33.049978    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 14:38:33.094681    6676 start.go:296] duration metric: took 4.6931598s for postStartSetup
	I0612 14:38:33.097776    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:35.123621    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:35.134345    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:35.134345    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:37.542306    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:37.542306    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:37.547433    6676 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 14:38:37.550739    6676 start.go:128] duration metric: took 2m0.8741703s to createHost
	I0612 14:38:37.550838    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:39.577011    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:39.587512    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:39.587512    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:41.984275    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:41.995835    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:42.000759    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:38:42.000981    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.198.154 22 <nil> <nil>}
	I0612 14:38:42.000981    6676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 14:38:42.131423    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228322.129590931
	
	I0612 14:38:42.131423    6676 fix.go:216] guest clock: 1718228322.129590931
	I0612 14:38:42.131423    6676 fix.go:229] Guest: 2024-06-12 14:38:42.129590931 -0700 PDT Remote: 2024-06-12 14:38:37.5507394 -0700 PDT m=+126.287384401 (delta=4.578851531s)
	I0612 14:38:42.131574    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:44.207102    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:44.207102    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:44.207102    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:46.606831    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:46.618080    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:46.623696    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:38:46.623696    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.198.154 22 <nil> <nil>}
	I0612 14:38:46.624277    6676 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718228322
	I0612 14:38:46.766225    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 21:38:42 UTC 2024
	
	I0612 14:38:46.766354    6676 fix.go:236] clock set: Wed Jun 12 21:38:42 UTC 2024
	 (err=<nil>)
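
	fix.go compares the guest clock against the host (a 4.58s delta here) and rewrites the guest clock over SSH with "date -s @<epoch>". A minimal Go sketch of that idea (runSSH is a stand-in; the threshold and the choice of reference clock are assumptions, not minikube's exact logic):

	package main

	import (
		"fmt"
		"time"
	)

	// runSSH stands in for the SSH runner; it would execute inside the guest.
	var runSSH = func(cmd string) error { fmt.Println("ssh:", cmd); return nil }

	// syncClock resets the guest clock when guest and host disagree by more
	// than a threshold.
	func syncClock(guest, host time.Time, threshold time.Duration) error {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		if delta <= threshold {
			return nil
		}
		return runSSH(fmt.Sprintf("sudo date -s @%d", host.Unix()))
	}

	func main() {
		guest := time.Unix(1718228322, 129590931)      // guest reading from the log
		host := guest.Add(-4578851531 * time.Nanosecond) // host reading (delta from the log)
		fmt.Println(syncClock(guest, host, time.Second))
	}
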
	I0612 14:38:46.766354    6676 start.go:83] releasing machines lock for "multinode-025000", held for 2m10.0899462s
	I0612 14:38:46.766517    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:48.779599    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:48.790484    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:48.790484    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:51.260440    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:51.272694    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:51.276488    6676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 14:38:51.276552    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:51.290212    6676 ssh_runner.go:195] Run: cat /version.json
	I0612 14:38:51.290212    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:38:53.414278    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:53.414278    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:53.414278    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:53.415148    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:38:53.415313    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:53.415313    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:38:56.003958    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:56.003958    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:56.011321    6676 sshutil.go:53] new ssh client: &{IP:172.23.198.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 14:38:56.032956    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:38:56.033594    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:38:56.033658    6676 sshutil.go:53] new ssh client: &{IP:172.23.198.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 14:38:56.100357    6676 command_runner.go:130] > {"iso_version": "v1.33.1-1718047936-19044", "kicbase_version": "v0.0.44-1718016726-19044", "minikube_version": "v1.33.1", "commit": "8a07c05cb41cba41fd6bf6981cdae9c899c82330"}
	I0612 14:38:56.104673    6676 ssh_runner.go:235] Completed: cat /version.json: (4.8144449s)
	I0612 14:38:56.115790    6676 ssh_runner.go:195] Run: systemctl --version
	I0612 14:38:56.180593    6676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 14:38:56.180593    6676 command_runner.go:130] > systemd 252 (252)
	I0612 14:38:56.180593    6676 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9040885s)
	I0612 14:38:56.180593    6676 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0612 14:38:56.191439    6676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 14:38:56.195914    6676 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0612 14:38:56.201296    6676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 14:38:56.211967    6676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 14:38:56.239588    6676 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0612 14:38:56.239655    6676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
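The find command above takes bridge and podman CNI configs out of play by renaming them with a .mk_disabled suffix; the log confirms 87-podman-bridge.conflist was caught. The same rename pass as a hedged Go sketch (not minikube's cni.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern) // patterns are static, so the error can only be nil
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous pass
			}
			fmt.Printf("disabling %s\n", m)
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}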
	I0612 14:38:56.239655    6676 start.go:494] detecting cgroup driver to use...
	I0612 14:38:56.239655    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 14:38:56.273338    6676 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0612 14:38:56.285230    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 14:38:56.318733    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 14:38:56.337742    6676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 14:38:56.349588    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 14:38:56.378846    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 14:38:56.412584    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 14:38:56.441926    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 14:38:56.474132    6676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 14:38:56.503328    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 14:38:56.535984    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 14:38:56.565494    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 14:38:56.595981    6676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 14:38:56.604456    6676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 14:38:56.625099    6676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 14:38:56.654478    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:38:56.835569    6676 ssh_runner.go:195] Run: sudo systemctl restart containerd
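The run of sed commands above rewrites /etc/containerd/config.toml in place so containerd matches the cgroupfs driver minikube selected: SystemdCgroup is forced to false, the legacy runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d before containerd is restarted. The key edit, expressed as a standalone Go sketch whose regexp mirrors the logged sed expression (illustrative, not minikube's implementation):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}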
	I0612 14:38:56.863839    6676 start.go:494] detecting cgroup driver to use...
	I0612 14:38:56.877116    6676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 14:38:56.899161    6676 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0612 14:38:56.899161    6676 command_runner.go:130] > [Unit]
	I0612 14:38:56.899161    6676 command_runner.go:130] > Description=Docker Application Container Engine
	I0612 14:38:56.899161    6676 command_runner.go:130] > Documentation=https://docs.docker.com
	I0612 14:38:56.899161    6676 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0612 14:38:56.899161    6676 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0612 14:38:56.899161    6676 command_runner.go:130] > StartLimitBurst=3
	I0612 14:38:56.899161    6676 command_runner.go:130] > StartLimitIntervalSec=60
	I0612 14:38:56.899161    6676 command_runner.go:130] > [Service]
	I0612 14:38:56.899161    6676 command_runner.go:130] > Type=notify
	I0612 14:38:56.899161    6676 command_runner.go:130] > Restart=on-failure
	I0612 14:38:56.899161    6676 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0612 14:38:56.899161    6676 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0612 14:38:56.899161    6676 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0612 14:38:56.899161    6676 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0612 14:38:56.899161    6676 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0612 14:38:56.899161    6676 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0612 14:38:56.899161    6676 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0612 14:38:56.899161    6676 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0612 14:38:56.899161    6676 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0612 14:38:56.899161    6676 command_runner.go:130] > ExecStart=
	I0612 14:38:56.899161    6676 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0612 14:38:56.899696    6676 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0612 14:38:56.899696    6676 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0612 14:38:56.899696    6676 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0612 14:38:56.899696    6676 command_runner.go:130] > LimitNOFILE=infinity
	I0612 14:38:56.899696    6676 command_runner.go:130] > LimitNPROC=infinity
	I0612 14:38:56.899696    6676 command_runner.go:130] > LimitCORE=infinity
	I0612 14:38:56.899696    6676 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0612 14:38:56.899696    6676 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0612 14:38:56.899805    6676 command_runner.go:130] > TasksMax=infinity
	I0612 14:38:56.899805    6676 command_runner.go:130] > TimeoutStartSec=0
	I0612 14:38:56.899805    6676 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0612 14:38:56.899805    6676 command_runner.go:130] > Delegate=yes
	I0612 14:38:56.899805    6676 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0612 14:38:56.899805    6676 command_runner.go:130] > KillMode=process
	I0612 14:38:56.899855    6676 command_runner.go:130] > [Install]
	I0612 14:38:56.899855    6676 command_runner.go:130] > WantedBy=multi-user.target
	I0612 14:38:56.912825    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 14:38:56.945766    6676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 14:38:56.990341    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 14:38:57.021745    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 14:38:57.053722    6676 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 14:38:57.112790    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 14:38:57.136703    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 14:38:57.166709    6676 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0612 14:38:57.179054    6676 ssh_runner.go:195] Run: which cri-dockerd
	I0612 14:38:57.185033    6676 command_runner.go:130] > /usr/bin/cri-dockerd
	I0612 14:38:57.196152    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 14:38:57.212595    6676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 14:38:57.254352    6676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 14:38:57.441630    6676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 14:38:57.617470    6676 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 14:38:57.617470    6676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
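This 130-byte daemon.json is what pins Docker itself to the cgroupfs driver. The log does not echo its contents, so the exact keys are an assumption here; a plausible minimal sketch using Docker's documented exec-opts setting, generated in Go:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed contents; the real file written by minikube may carry more keys.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // destined for /etc/docker/daemon.json
}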
	I0612 14:38:57.656417    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:38:57.844062    6676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 14:39:00.331011    6676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4869402s)
	I0612 14:39:00.346068    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 14:39:00.379012    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 14:39:00.411975    6676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 14:39:00.587468    6676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 14:39:00.779001    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:39:00.963081    6676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 14:39:01.005324    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 14:39:01.038866    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:39:01.221140    6676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 14:39:01.323060    6676 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 14:39:01.334307    6676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 14:39:01.344833    6676 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0612 14:39:01.344833    6676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 14:39:01.344833    6676 command_runner.go:130] > Device: 0,22	Inode: 881         Links: 1
	I0612 14:39:01.344833    6676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0612 14:39:01.344833    6676 command_runner.go:130] > Access: 2024-06-12 21:39:01.238519976 +0000
	I0612 14:39:01.344833    6676 command_runner.go:130] > Modify: 2024-06-12 21:39:01.238519976 +0000
	I0612 14:39:01.344833    6676 command_runner.go:130] > Change: 2024-06-12 21:39:01.242519932 +0000
	I0612 14:39:01.344833    6676 command_runner.go:130] >  Birth: -
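The stat output above is one successful iteration of the "Will wait 60s for socket path" loop. A small Go sketch of such a poll-until-exists wait (waitForSocket is a hypothetical helper, not minikube's API):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls stat() on path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("socket ready")
}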
	I0612 14:39:01.344833    6676 start.go:562] Will wait 60s for crictl version
	I0612 14:39:01.357255    6676 ssh_runner.go:195] Run: which crictl
	I0612 14:39:01.360346    6676 command_runner.go:130] > /usr/bin/crictl
	I0612 14:39:01.365965    6676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 14:39:01.422311    6676 command_runner.go:130] > Version:  0.1.0
	I0612 14:39:01.422366    6676 command_runner.go:130] > RuntimeName:  docker
	I0612 14:39:01.422403    6676 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0612 14:39:01.422403    6676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 14:39:01.422403    6676 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 14:39:01.432411    6676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 14:39:01.460227    6676 command_runner.go:130] > 26.1.4
	I0612 14:39:01.471117    6676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 14:39:01.503689    6676 command_runner.go:130] > 26.1.4
	I0612 14:39:01.504409    6676 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 14:39:01.504409    6676 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 14:39:01.511883    6676 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 14:39:01.511883    6676 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 14:39:01.511883    6676 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 14:39:01.511977    6676 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 14:39:01.515216    6676 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 14:39:01.515216    6676 ip.go:210] interface addr: 172.23.192.1/20
	I0612 14:39:01.526426    6676 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 14:39:01.531878    6676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 14:39:01.551229    6676 kubeadm.go:877] updating cluster {Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 14:39:01.551871    6676 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 14:39:01.561004    6676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 14:39:01.581565    6676 docker.go:685] Got preloaded images: 
	I0612 14:39:01.581565    6676 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0612 14:39:01.594275    6676 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0612 14:39:01.612811    6676 command_runner.go:139] > {"Repositories":{}}
	I0612 14:39:01.625970    6676 ssh_runner.go:195] Run: which lz4
	I0612 14:39:01.632095    6676 command_runner.go:130] > /usr/bin/lz4
	I0612 14:39:01.632095    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0612 14:39:01.643258    6676 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0612 14:39:01.651181    6676 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 14:39:01.651181    6676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0612 14:39:01.651181    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0612 14:39:03.640891    6676 docker.go:649] duration metric: took 2.0079621s to copy over tarball
	I0612 14:39:03.651978    6676 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0612 14:39:12.883358    6676 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.230744s)
	I0612 14:39:12.883454    6676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0612 14:39:12.940586    6676 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0612 14:39:12.950984    6676 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0612 14:39:12.961314    6676 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
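The repositories.json dumped above maps repository -> tag-or-digest -> image ID; after extracting the preload tarball, minikube ships a merged copy back so Docker's image store and the preloaded layers agree. A Go sketch of reading that structure (the Repositories type is inferred from the logged JSON):

package main

import (
	"encoding/json"
	"fmt"
)

// Repositories mirrors the shape of /var/lib/docker/image/overlay2/repositories.json.
type Repositories struct {
	Repositories map[string]map[string]string `json:"Repositories"`
}

func main() {
	data := []byte(`{"Repositories":{"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}`)
	var r Repositories
	if err := json.Unmarshal(data, &r); err != nil {
		panic(err)
	}
	for repo, refs := range r.Repositories {
		for ref, id := range refs {
			fmt.Printf("%s: %s -> %.19s\n", repo, ref, id)
		}
	}
}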
	I0612 14:39:13.004185    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:39:13.212567    6676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 14:39:16.073535    6676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8609584s)
	I0612 14:39:16.083616    6676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 14:39:16.110372    6676 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0612 14:39:16.111349    6676 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0612 14:39:16.111349    6676 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 14:39:16.111349    6676 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0612 14:39:16.111349    6676 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0612 14:39:16.111349    6676 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0612 14:39:16.111349    6676 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0612 14:39:16.111349    6676 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 14:39:16.111349    6676 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0612 14:39:16.111493    6676 cache_images.go:84] Images are preloaded, skipping loading
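The preload guard runs in two passes: before extraction, `docker images` returned nothing and kube-apiserver "wasn't preloaded"; after extraction the full expected list is present, so image loading is skipped. A hedged Go sketch of that check (illustrative, not cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	const want = "registry.k8s.io/kube-apiserver:v1.30.1"
	fmt.Printf("%s preloaded: %v\n", want, have[want])
}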
	I0612 14:39:16.111493    6676 kubeadm.go:928] updating node { 172.23.198.154 8443 v1.30.1 docker true true} ...
	I0612 14:39:16.111712    6676 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.198.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 14:39:16.120921    6676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0612 14:39:16.155247    6676 command_runner.go:130] > cgroupfs
	I0612 14:39:16.155247    6676 cni.go:84] Creating CNI manager for ""
	I0612 14:39:16.155247    6676 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 14:39:16.155247    6676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 14:39:16.155247    6676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.198.154 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025000 NodeName:multinode-025000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.198.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.198.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 14:39:16.155247    6676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.198.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-025000"
	  kubeletExtraArgs:
	    node-ip: 172.23.198.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.198.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 14:39:16.168230    6676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 14:39:16.186104    6676 command_runner.go:130] > kubeadm
	I0612 14:39:16.186104    6676 command_runner.go:130] > kubectl
	I0612 14:39:16.186181    6676 command_runner.go:130] > kubelet
	I0612 14:39:16.186181    6676 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 14:39:16.197769    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 14:39:16.218230    6676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0612 14:39:16.250800    6676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 14:39:16.277979    6676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
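The kubeadm.yaml.new shipped above is the rendered form of the kubeadm options struct logged earlier (AdvertiseAddress, NodeName, CRISocket, and so on). A toy Go sketch of that kind of template rendering; the template text is abridged and is not minikube's actual bsutil template:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "172.23.198.154",
		"APIServerPort":    8443,
		"CRISocket":        "unix:///var/run/cri-dockerd.sock",
		"NodeName":         "multinode-025000",
	})
}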
	I0612 14:39:16.317718    6676 ssh_runner.go:195] Run: grep 172.23.198.154	control-plane.minikube.internal$ /etc/hosts
	I0612 14:39:16.324015    6676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.198.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
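The bash one-liner above updates /etc/hosts idempotently: drop any line already ending in the name, then append a fresh "ip<TAB>name" mapping (the same trick used for host.minikube.internal earlier). The same logic as a small Go function (addHostsEntry is hypothetical):

package main

import (
	"fmt"
	"strings"
)

// addHostsEntry removes any existing mapping for name and appends ip<TAB>name.
func addHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(addHostsEntry("127.0.0.1\tlocalhost\n", "172.23.198.154", "control-plane.minikube.internal"))
}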
	I0612 14:39:16.359841    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:39:16.542817    6676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 14:39:16.568600    6676 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000 for IP: 172.23.198.154
	I0612 14:39:16.568600    6676 certs.go:194] generating shared ca certs ...
	I0612 14:39:16.568670    6676 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:16.569497    6676 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 14:39:16.569913    6676 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 14:39:16.570217    6676 certs.go:256] generating profile certs ...
	I0612 14:39:16.570966    6676 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\client.key
	I0612 14:39:16.571086    6676 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\client.crt with IP's: []
	I0612 14:39:16.747505    6676 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\client.crt ...
	I0612 14:39:16.747505    6676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\client.crt: {Name:mk46e90bacae60f7774b8c33c7e656b827adf83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:16.754100    6676 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\client.key ...
	I0612 14:39:16.754100    6676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\client.key: {Name:mk67ca7ab539aa687731581f5038ebbe102f1300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:16.755882    6676 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.b071eefe
	I0612 14:39:16.755882    6676 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.b071eefe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.198.154]
	I0612 14:39:16.985259    6676 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.b071eefe ...
	I0612 14:39:16.985259    6676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.b071eefe: {Name:mk1417bb5afce7755ec96dfad0f2abcada4f9f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:16.995721    6676 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.b071eefe ...
	I0612 14:39:16.995721    6676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.b071eefe: {Name:mkf56fd26cb9346269e55088146c49dc0887a63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:16.996052    6676 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.b071eefe -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt
	I0612 14:39:17.006254    6676 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.b071eefe -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key
	I0612 14:39:17.008777    6676 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key
	I0612 14:39:17.010160    6676 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt with IP's: []
	I0612 14:39:17.181903    6676 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt ...
	I0612 14:39:17.181903    6676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt: {Name:mk0fe79018cc6e23f31e49c89484dbaf8ecb85af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:17.188669    6676 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key ...
	I0612 14:39:17.188669    6676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key: {Name:mk3efd174196fdadeba8c7c89ceb0f9fc72b9c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:17.189678    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 14:39:17.190834    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 14:39:17.191026    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 14:39:17.191267    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 14:39:17.191267    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 14:39:17.191267    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 14:39:17.191267    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 14:39:17.192620    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
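Each profile cert above is generated with an explicit IP SAN list; the apiserver cert covers the service VIP (10.96.0.1), loopback, and the node IP (172.23.198.154) so clients can validate it on any of those addresses. A self-contained Go sketch of issuing a cert with IP SANs; it is self-signed for brevity, whereas minikube signs with its minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // roughly the 26280h expiry in the config above
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.23.198.154"),
		},
		KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}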
	I0612 14:39:17.202737    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 14:39:17.203323    6676 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 14:39:17.203411    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 14:39:17.203663    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 14:39:17.204109    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 14:39:17.204423    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 14:39:17.204955    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 14:39:17.205234    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 14:39:17.205432    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:39:17.205646    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 14:39:17.206933    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 14:39:17.252026    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 14:39:17.294081    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 14:39:17.342132    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 14:39:17.387976    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 14:39:17.442254    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0612 14:39:17.491864    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 14:39:17.537824    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 14:39:17.582420    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 14:39:17.618459    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 14:39:17.671290    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 14:39:17.714536    6676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 14:39:17.756436    6676 ssh_runner.go:195] Run: openssl version
	I0612 14:39:17.776561    6676 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 14:39:17.788693    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 14:39:17.822927    6676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:39:17.830323    6676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:39:17.830474    6676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:39:17.841977    6676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:39:17.851602    6676 command_runner.go:130] > b5213941
	I0612 14:39:17.863822    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 14:39:17.896567    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 14:39:17.930892    6676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 14:39:17.938278    6676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 14:39:17.938278    6676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 14:39:17.951884    6676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 14:39:17.959979    6676 command_runner.go:130] > 51391683
	I0612 14:39:17.971697    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 14:39:18.004208    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 14:39:18.034629    6676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 14:39:18.040677    6676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 14:39:18.042030    6676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 14:39:18.054288    6676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 14:39:18.057229    6676 command_runner.go:130] > 3ec20f2e
	I0612 14:39:18.074509    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
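Each CA bundle copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash` and symlinked as <hash>.0 under /etc/ssl/certs, which is how OpenSSL locates trust anchors by subject hash (b5213941, 51391683, and 3ec20f2e above). The same two steps driven from Go, shelling out to the openssl CLI:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 in the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
	fmt.Println(link)
}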
	I0612 14:39:18.105676    6676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 14:39:18.113233    6676 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 14:39:18.113233    6676 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 14:39:18.113574    6676 kubeadm.go:391] StartCluster: {Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 14:39:18.124534    6676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 14:39:18.159034    6676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 14:39:18.175890    6676 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0612 14:39:18.175965    6676 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0612 14:39:18.175965    6676 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0612 14:39:18.187844    6676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 14:39:18.216709    6676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 14:39:18.225264    6676 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0612 14:39:18.225264    6676 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0612 14:39:18.225264    6676 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0612 14:39:18.225264    6676 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 14:39:18.234794    6676 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 14:39:18.234794    6676 kubeadm.go:156] found existing configuration files:
	
	I0612 14:39:18.245514    6676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 14:39:18.253767    6676 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 14:39:18.261424    6676 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 14:39:18.273161    6676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 14:39:18.301790    6676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 14:39:18.314433    6676 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 14:39:18.318793    6676 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 14:39:18.331125    6676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 14:39:18.359174    6676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 14:39:18.362324    6676 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 14:39:18.376665    6676 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 14:39:18.387548    6676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 14:39:18.420271    6676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 14:39:18.422860    6676 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 14:39:18.439700    6676 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 14:39:18.455191    6676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 14:39:18.474402    6676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0612 14:39:18.907201    6676 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 14:39:18.907331    6676 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 14:39:31.856665    6676 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0612 14:39:31.856665    6676 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0612 14:39:31.856872    6676 kubeadm.go:309] [preflight] Running pre-flight checks
	I0612 14:39:31.856930    6676 command_runner.go:130] > [preflight] Running pre-flight checks
	I0612 14:39:31.857012    6676 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 14:39:31.857141    6676 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0612 14:39:31.857365    6676 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 14:39:31.857365    6676 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0612 14:39:31.857576    6676 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 14:39:31.857576    6676 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0612 14:39:31.857785    6676 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 14:39:31.857785    6676 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 14:39:31.860654    6676 out.go:204]   - Generating certificates and keys ...
	I0612 14:39:31.860999    6676 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0612 14:39:31.861058    6676 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0612 14:39:31.861197    6676 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0612 14:39:31.861266    6676 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0612 14:39:31.861397    6676 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 14:39:31.861435    6676 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0612 14:39:31.861513    6676 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0612 14:39:31.861513    6676 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0612 14:39:31.861661    6676 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0612 14:39:31.861699    6676 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0612 14:39:31.861729    6676 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0612 14:39:31.861729    6676 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0612 14:39:31.861729    6676 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0612 14:39:31.861729    6676 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0612 14:39:31.862356    6676 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-025000] and IPs [172.23.198.154 127.0.0.1 ::1]
	I0612 14:39:31.862450    6676 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-025000] and IPs [172.23.198.154 127.0.0.1 ::1]
	I0612 14:39:31.862613    6676 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0612 14:39:31.862613    6676 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0612 14:39:31.862936    6676 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-025000] and IPs [172.23.198.154 127.0.0.1 ::1]
	I0612 14:39:31.863001    6676 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-025000] and IPs [172.23.198.154 127.0.0.1 ::1]
	I0612 14:39:31.863262    6676 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 14:39:31.863317    6676 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0612 14:39:31.863545    6676 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 14:39:31.863636    6676 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0612 14:39:31.863636    6676 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0612 14:39:31.863636    6676 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0612 14:39:31.863636    6676 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 14:39:31.863636    6676 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 14:39:31.863636    6676 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 14:39:31.863636    6676 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 14:39:31.863636    6676 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 14:39:31.863636    6676 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 14:39:31.864292    6676 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 14:39:31.864396    6676 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 14:39:31.864544    6676 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 14:39:31.864544    6676 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 14:39:31.864805    6676 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 14:39:31.864805    6676 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 14:39:31.864805    6676 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 14:39:31.864805    6676 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 14:39:31.864805    6676 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 14:39:31.868629    6676 out.go:204]   - Booting up control plane ...
	I0612 14:39:31.864805    6676 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 14:39:31.868629    6676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 14:39:31.868629    6676 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 14:39:31.868629    6676 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 14:39:31.869777    6676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 14:39:31.869777    6676 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 14:39:31.869937    6676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 14:39:31.870227    6676 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 14:39:31.870227    6676 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 14:39:31.870227    6676 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 14:39:31.870227    6676 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 14:39:31.870580    6676 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0612 14:39:31.870653    6676 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0612 14:39:31.870835    6676 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 14:39:31.870835    6676 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0612 14:39:31.870835    6676 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 14:39:31.870835    6676 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 14:39:31.870835    6676 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001765512s
	I0612 14:39:31.870835    6676 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001765512s
	I0612 14:39:31.871573    6676 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 14:39:31.871573    6676 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0612 14:39:31.871785    6676 kubeadm.go:309] [api-check] The API server is healthy after 6.002314531s
	I0612 14:39:31.871785    6676 command_runner.go:130] > [api-check] The API server is healthy after 6.002314531s
	I0612 14:39:31.872005    6676 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 14:39:31.872005    6676 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0612 14:39:31.872307    6676 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 14:39:31.872307    6676 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0612 14:39:31.872307    6676 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0612 14:39:31.872307    6676 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0612 14:39:31.872749    6676 command_runner.go:130] > [mark-control-plane] Marking the node multinode-025000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 14:39:31.872749    6676 kubeadm.go:309] [mark-control-plane] Marking the node multinode-025000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0612 14:39:31.872749    6676 kubeadm.go:309] [bootstrap-token] Using token: 10or2i.mehqenyf67aq8068
	I0612 14:39:31.880341    6676 out.go:204]   - Configuring RBAC rules ...
	I0612 14:39:31.872749    6676 command_runner.go:130] > [bootstrap-token] Using token: 10or2i.mehqenyf67aq8068
	I0612 14:39:31.881913    6676 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 14:39:31.881913    6676 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0612 14:39:31.882031    6676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 14:39:31.882031    6676 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0612 14:39:31.882031    6676 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 14:39:31.882031    6676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0612 14:39:31.882555    6676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 14:39:31.882612    6676 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0612 14:39:31.882612    6676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 14:39:31.882612    6676 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0612 14:39:31.882612    6676 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 14:39:31.882612    6676 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0612 14:39:31.882612    6676 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 14:39:31.882612    6676 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0612 14:39:31.883481    6676 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0612 14:39:31.883481    6676 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0612 14:39:31.883611    6676 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0612 14:39:31.883696    6676 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0612 14:39:31.883696    6676 kubeadm.go:309] 
	I0612 14:39:31.883865    6676 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0612 14:39:31.883898    6676 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0612 14:39:31.883898    6676 kubeadm.go:309] 
	I0612 14:39:31.884054    6676 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0612 14:39:31.884054    6676 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0612 14:39:31.884054    6676 kubeadm.go:309] 
	I0612 14:39:31.884277    6676 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0612 14:39:31.884333    6676 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0612 14:39:31.884480    6676 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 14:39:31.884513    6676 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0612 14:39:31.884605    6676 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 14:39:31.884605    6676 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0612 14:39:31.884605    6676 kubeadm.go:309] 
	I0612 14:39:31.884605    6676 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0612 14:39:31.884605    6676 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0612 14:39:31.884605    6676 kubeadm.go:309] 
	I0612 14:39:31.884605    6676 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 14:39:31.884605    6676 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0612 14:39:31.884605    6676 kubeadm.go:309] 
	I0612 14:39:31.884605    6676 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0612 14:39:31.885156    6676 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0612 14:39:31.885240    6676 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 14:39:31.885240    6676 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0612 14:39:31.885240    6676 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 14:39:31.885240    6676 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0612 14:39:31.885240    6676 kubeadm.go:309] 
	I0612 14:39:31.885240    6676 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0612 14:39:31.885240    6676 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0612 14:39:31.886083    6676 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0612 14:39:31.886083    6676 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0612 14:39:31.886083    6676 kubeadm.go:309] 
	I0612 14:39:31.886480    6676 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 10or2i.mehqenyf67aq8068 \
	I0612 14:39:31.886530    6676 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 10or2i.mehqenyf67aq8068 \
	I0612 14:39:31.886645    6676 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 \
	I0612 14:39:31.886645    6676 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 \
	I0612 14:39:31.886645    6676 command_runner.go:130] > 	--control-plane 
	I0612 14:39:31.886645    6676 kubeadm.go:309] 	--control-plane 
	I0612 14:39:31.886645    6676 kubeadm.go:309] 
	I0612 14:39:31.886645    6676 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0612 14:39:31.886645    6676 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0612 14:39:31.886645    6676 kubeadm.go:309] 
	I0612 14:39:31.887300    6676 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 10or2i.mehqenyf67aq8068 \
	I0612 14:39:31.887300    6676 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 10or2i.mehqenyf67aq8068 \
	I0612 14:39:31.887536    6676 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 
	I0612 14:39:31.887536    6676 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 
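
Note on the join commands above: the --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA's public key. As a sketch, it can be recomputed on the control-plane host, assuming the standard kubeadm PKI path /etc/kubernetes/pki/ca.crt (this is the recipe from the upstream kubeadm docs):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
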
	I0612 14:39:31.887536    6676 cni.go:84] Creating CNI manager for ""
	I0612 14:39:31.887536    6676 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0612 14:39:31.890578    6676 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0612 14:39:31.910287    6676 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0612 14:39:31.921118    6676 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0612 14:39:31.921719    6676 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0612 14:39:31.921719    6676 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0612 14:39:31.921719    6676 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 14:39:31.921719    6676 command_runner.go:130] > Access: 2024-06-12 21:37:39.814590300 +0000
	I0612 14:39:31.921806    6676 command_runner.go:130] > Modify: 2024-06-11 01:01:29.000000000 +0000
	I0612 14:39:31.921806    6676 command_runner.go:130] > Change: 2024-06-12 14:37:31.341000000 +0000
	I0612 14:39:31.921806    6676 command_runner.go:130] >  Birth: -
	I0612 14:39:31.921875    6676 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0612 14:39:31.921941    6676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0612 14:39:31.969361    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0612 14:39:32.397732    6676 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0612 14:39:32.397819    6676 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0612 14:39:32.397819    6676 command_runner.go:130] > serviceaccount/kindnet created
	I0612 14:39:32.397819    6676 command_runner.go:130] > daemonset.apps/kindnet created
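
The four "created" lines above confirm the kindnet CNI manifest applied cleanly. A minimal follow-up check, assuming kubectl is pointed at this cluster and that the daemonset carries the upstream app=kindnet label:

    kubectl -n kube-system rollout status daemonset/kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide
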
	I0612 14:39:32.397819    6676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 14:39:32.412162    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-025000 minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=multinode-025000 minikube.k8s.io/primary=true
	I0612 14:39:32.412162    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:32.429693    6676 command_runner.go:130] > -16
	I0612 14:39:32.429773    6676 ops.go:34] apiserver oom_adj: -16
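
An oom_adj of -16 tells the kernel OOM killer to strongly prefer other processes over kube-apiserver under memory pressure (the legacy scale runs from -17, never kill, to +15). The same check by hand over SSH:

    cat /proc/$(pgrep kube-apiserver)/oom_adj    # expect a negative value such as -16
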
	I0612 14:39:32.657048    6676 command_runner.go:130] > node/multinode-025000 labeled
	I0612 14:39:32.660233    6676 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0612 14:39:32.673919    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:32.820321    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:33.174216    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:33.267460    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:33.684197    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:33.783949    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:34.186920    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:34.281248    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:34.678035    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:34.774142    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:35.175388    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:35.277162    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:35.675767    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:35.792432    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:36.171657    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:36.265089    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:36.677497    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:36.780764    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:37.180721    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:37.280641    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:37.681207    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:37.782232    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:38.185930    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:38.290029    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:38.683868    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:38.782891    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:39.184084    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:39.282084    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:39.676997    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:39.770459    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:40.182584    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:40.276819    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:40.677636    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:40.770996    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:41.186406    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:41.282368    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:41.688292    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:41.780828    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:42.176801    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:42.270257    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:42.679088    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:42.782929    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:43.189438    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:43.320348    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:43.688624    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:43.779258    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:44.176332    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:44.272683    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:44.681871    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:44.777947    6676 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0612 14:39:45.188429    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0612 14:39:45.305184    6676 command_runner.go:130] > NAME      SECRETS   AGE
	I0612 14:39:45.305253    6676 command_runner.go:130] > default   0         0s
	I0612 14:39:45.305253    6676 kubeadm.go:1107] duration metric: took 12.907393s to wait for elevateKubeSystemPrivileges
	W0612 14:39:45.305253    6676 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0612 14:39:45.305253    6676 kubeadm.go:393] duration metric: took 27.1915912s to StartCluster
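
The burst of 'serviceaccounts "default" not found' errors above is expected: minikube polls until the controller-manager creates the default service account, which signals that kube-system privileges can be elevated. An equivalent wait loop, sketched with the same in-VM kubectl binary:

    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
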
	I0612 14:39:45.305441    6676 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:45.305663    6676 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 14:39:45.307599    6676 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:39:45.309708    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0612 14:39:45.309708    6676 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 14:39:45.315052    6676 out.go:177] * Verifying Kubernetes components...
	I0612 14:39:45.309708    6676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 14:39:45.310381    6676 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:39:45.315099    6676 addons.go:69] Setting storage-provisioner=true in profile "multinode-025000"
	I0612 14:39:45.315099    6676 addons.go:69] Setting default-storageclass=true in profile "multinode-025000"
	I0612 14:39:45.318947    6676 addons.go:234] Setting addon storage-provisioner=true in "multinode-025000"
	I0612 14:39:45.318947    6676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-025000"
	I0612 14:39:45.318947    6676 host.go:66] Checking if "multinode-025000" exists ...
	I0612 14:39:45.318947    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:39:45.320155    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:39:45.330892    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:39:45.645867    6676 command_runner.go:130] > apiVersion: v1
	I0612 14:39:45.645938    6676 command_runner.go:130] > data:
	I0612 14:39:45.645938    6676 command_runner.go:130] >   Corefile: |
	I0612 14:39:45.645938    6676 command_runner.go:130] >     .:53 {
	I0612 14:39:45.645938    6676 command_runner.go:130] >         errors
	I0612 14:39:45.646002    6676 command_runner.go:130] >         health {
	I0612 14:39:45.646002    6676 command_runner.go:130] >            lameduck 5s
	I0612 14:39:45.646002    6676 command_runner.go:130] >         }
	I0612 14:39:45.646002    6676 command_runner.go:130] >         ready
	I0612 14:39:45.646002    6676 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0612 14:39:45.646002    6676 command_runner.go:130] >            pods insecure
	I0612 14:39:45.646002    6676 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0612 14:39:45.646065    6676 command_runner.go:130] >            ttl 30
	I0612 14:39:45.646065    6676 command_runner.go:130] >         }
	I0612 14:39:45.646065    6676 command_runner.go:130] >         prometheus :9153
	I0612 14:39:45.646065    6676 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0612 14:39:45.646132    6676 command_runner.go:130] >            max_concurrent 1000
	I0612 14:39:45.646132    6676 command_runner.go:130] >         }
	I0612 14:39:45.646132    6676 command_runner.go:130] >         cache 30
	I0612 14:39:45.646132    6676 command_runner.go:130] >         loop
	I0612 14:39:45.646132    6676 command_runner.go:130] >         reload
	I0612 14:39:45.646132    6676 command_runner.go:130] >         loadbalance
	I0612 14:39:45.646132    6676 command_runner.go:130] >     }
	I0612 14:39:45.646132    6676 command_runner.go:130] > kind: ConfigMap
	I0612 14:39:45.646195    6676 command_runner.go:130] > metadata:
	I0612 14:39:45.646195    6676 command_runner.go:130] >   creationTimestamp: "2024-06-12T21:39:31Z"
	I0612 14:39:45.646195    6676 command_runner.go:130] >   name: coredns
	I0612 14:39:45.646195    6676 command_runner.go:130] >   namespace: kube-system
	I0612 14:39:45.646195    6676 command_runner.go:130] >   resourceVersion: "256"
	I0612 14:39:45.646195    6676 command_runner.go:130] >   uid: acc342ed-c70c-4158-b619-ac292639c507
	I0612 14:39:45.646489    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.23.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0612 14:39:45.773924    6676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 14:39:46.155228    6676 command_runner.go:130] > configmap/coredns replaced
	I0612 14:39:46.167804    6676 start.go:946] {"host.minikube.internal": 172.23.192.1} host record injected into CoreDNS's ConfigMap
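
The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a log directive before errors and a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the Hyper-V host. Reconstructed from those sed expressions, the edited region of the Corefile should read roughly:

    .:53 {
        log
        errors
        ...
        hosts {
           172.23.192.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }
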
	I0612 14:39:46.168726    6676 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 14:39:46.168726    6676 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 14:39:46.169160    6676 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.198.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 14:39:46.169160    6676 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.198.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 14:39:46.171015    6676 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 14:39:46.171462    6676 node_ready.go:35] waiting up to 6m0s for node "multinode-025000" to be "Ready" ...
	I0612 14:39:46.171648    6676 round_trippers.go:463] GET https://172.23.198.154:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0612 14:39:46.171718    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:46.171718    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:46.171648    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:46.171718    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:46.171718    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:46.171801    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:46.171801    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:46.191129    6676 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0612 14:39:46.194327    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:46.194327    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:46.194327    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:46.194327    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:46 GMT
	I0612 14:39:46.194327    6676 round_trippers.go:580]     Audit-Id: c84d72ba-8c5e-41d6-9f36-34713b0f68e2
	I0612 14:39:46.194327    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:46.194327    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:46.194510    6676 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0612 14:39:46.194633    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:46.194633    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:46.194633    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:46.194728    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:46.194728    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:46.194728    6676 round_trippers.go:580]     Content-Length: 291
	I0612 14:39:46.194728    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:46 GMT
	I0612 14:39:46.194728    6676 round_trippers.go:580]     Audit-Id: 539813be-b744-43cc-ba78-8220b35530ad
	I0612 14:39:46.194845    6676 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20b8c243-5685-473c-968e-fb6cfb4e8949","resourceVersion":"393","creationTimestamp":"2024-06-12T21:39:31Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0612 14:39:46.195210    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:46.195566    6676 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20b8c243-5685-473c-968e-fb6cfb4e8949","resourceVersion":"393","creationTimestamp":"2024-06-12T21:39:31Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0612 14:39:46.195609    6676 round_trippers.go:463] PUT https://172.23.198.154:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0612 14:39:46.195609    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:46.195609    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:46.195609    6676 round_trippers.go:473]     Content-Type: application/json
	I0612 14:39:46.195609    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:46.209025    6676 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0612 14:39:46.222322    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:46.222322    6676 round_trippers.go:580]     Audit-Id: 731741d1-52db-4f7f-a18d-3d439c187bb9
	I0612 14:39:46.222322    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:46.222322    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:46.222322    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:46.222322    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:46.222322    6676 round_trippers.go:580]     Content-Length: 291
	I0612 14:39:46.222322    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:46 GMT
	I0612 14:39:46.222457    6676 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20b8c243-5685-473c-968e-fb6cfb4e8949","resourceVersion":"395","creationTimestamp":"2024-06-12T21:39:31Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0612 14:39:46.691544    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:46.691544    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:46.691544    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:46.691544    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:46.691544    6676 round_trippers.go:463] GET https://172.23.198.154:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0612 14:39:46.691544    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:46.691544    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:46.691544    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:46.694258    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:39:46.694258    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:46.695623    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:46.695623    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:46.695623    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:46.695623    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:46.695623    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:46 GMT
	I0612 14:39:46.695623    6676 round_trippers.go:580]     Audit-Id: 39ef0f18-0798-4b69-8dda-bdc999579630
	I0612 14:39:46.695798    6676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 14:39:46.695798    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:46.695798    6676 round_trippers.go:580]     Audit-Id: 9711bc8e-150d-4aef-bb4b-80bc67a6ddc2
	I0612 14:39:46.695798    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:46.695942    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:46.695942    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:46.695942    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:46.695942    6676 round_trippers.go:580]     Content-Length: 291
	I0612 14:39:46.695942    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:46 GMT
	I0612 14:39:46.695942    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:46.696111    6676 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"20b8c243-5685-473c-968e-fb6cfb4e8949","resourceVersion":"405","creationTimestamp":"2024-06-12T21:39:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0612 14:39:46.696207    6676 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-025000" context rescaled to 1 replicas
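
The GET/PUT pair on the coredns scale subresource above is equivalent to a plain kubectl scale; minikube's single-node default keeps one CoreDNS replica. The same operation by hand, as a sketch:

    kubectl -n kube-system scale deployment coredns --replicas=1
    kubectl -n kube-system get deploy coredns   # settles at 1/1 once the extra pod terminates
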
	I0612 14:39:47.189714    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:47.189804    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:47.189804    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:47.189804    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:47.193808    6676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 14:39:47.193808    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:47.193884    6676 round_trippers.go:580]     Audit-Id: ba5f3e28-6f1c-4ee1-bc5c-8916d3f91f1c
	I0612 14:39:47.193884    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:47.193884    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:47.193884    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:47.193884    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:47.193884    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:47 GMT
	I0612 14:39:47.194253    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:47.625376    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:39:47.628347    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:39:47.628518    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:39:47.628648    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:39:47.631610    6676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 14:39:47.630070    6676 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 14:39:47.634643    6676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 14:39:47.634717    6676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0612 14:39:47.634822    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:39:47.634919    6676 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.198.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 14:39:47.635693    6676 addons.go:234] Setting addon default-storageclass=true in "multinode-025000"
	I0612 14:39:47.635693    6676 host.go:66] Checking if "multinode-025000" exists ...
	I0612 14:39:47.637059    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:39:47.687350    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:47.687590    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:47.687590    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:47.687590    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:47.688353    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:47.688353    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:47.688353    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:47.688353    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:47 GMT
	I0612 14:39:47.688353    6676 round_trippers.go:580]     Audit-Id: 96018469-07ff-41e4-b3cb-4d507c75a3b4
	I0612 14:39:47.688353    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:47.692641    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:47.692641    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:47.692853    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:48.172992    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:48.173059    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:48.173126    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:48.173126    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:48.183553    6676 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 14:39:48.190054    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:48.190182    6676 round_trippers.go:580]     Audit-Id: 5b31e29d-c1ba-4057-b535-88b5b27b42e4
	I0612 14:39:48.190182    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:48.190182    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:48.190182    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:48.190182    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:48.190264    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:48 GMT
	I0612 14:39:48.190773    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:48.191554    6676 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
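
node_ready polls the Node object until its Ready condition flips to True (kubelet up and CNI routes programmed). The same condition can be read directly, assuming kubectl is pointed at this cluster:

    kubectl get node multinode-025000 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # "False" here, "True" once ready
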
	I0612 14:39:48.685859    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:48.685915    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:48.685951    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:48.685951    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:48.691905    6676 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 14:39:48.692008    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:48.692008    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:48.692072    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:48 GMT
	I0612 14:39:48.692128    6676 round_trippers.go:580]     Audit-Id: 03d4ad67-8081-4269-b74f-0162b5437324
	I0612 14:39:48.692128    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:48.692128    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:48.692128    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:48.694296    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:49.186918    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:49.187044    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:49.187083    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:49.187083    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:49.189698    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:39:49.189698    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:49.189698    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:49.189698    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:49.189698    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:49.190011    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:49.190011    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:49 GMT
	I0612 14:39:49.190011    6676 round_trippers.go:580]     Audit-Id: 1fc58aaf-75f8-4644-858e-75121307465a
	I0612 14:39:49.190456    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:49.680328    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:49.680577    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:49.680577    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:49.680577    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:49.680998    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:49.684837    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:49.684837    6676 round_trippers.go:580]     Audit-Id: 8e2ce3c8-bc93-4e4f-bab5-63bc6e35036b
	I0612 14:39:49.684837    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:49.684837    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:49.684837    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:49.684837    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:49.684837    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:49 GMT
	I0612 14:39:49.685268    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:49.928039    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:39:49.928039    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:39:49.928135    6676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0612 14:39:49.928135    6676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0612 14:39:49.928135    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:39:49.954689    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:39:49.954689    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:39:49.954819    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
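The two "[executing ==>]" PowerShell invocations above are how the Hyper-V machine driver reads the VM's run state and first IP address. A minimal Go sketch of that query follows, assuming powershell.exe is reachable on PATH; this is illustrative, not minikube's actual libmachine driver code.

    // Query a Hyper-V VM's first IP address by shelling out to PowerShell,
    // mirroring the "[executing ==>]" lines in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hypervVMIP(vmName string) (string, error) {
        // Same expression the log shows: first IP of the first network adapter.
        script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        ip, err := hypervVMIP("multinode-025000")
        if err != nil {
            fmt.Println("query failed:", err)
            return
        }
        fmt.Println("VM IP:", ip) // e.g. 172.23.198.154 in the run above
    }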
	I0612 14:39:50.175559    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:50.175596    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:50.175596    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:50.175596    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:50.176283    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:50.176283    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:50.176283    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:50 GMT
	I0612 14:39:50.176283    6676 round_trippers.go:580]     Audit-Id: 740005f2-6c42-4e10-8dd8-552ccef591e8
	I0612 14:39:50.176283    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:50.181078    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:50.181078    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:50.181078    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:50.181267    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:50.671879    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:50.672099    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:50.672099    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:50.672160    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:50.672403    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:50.675629    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:50.675703    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:50.675703    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:50.675703    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:50.675790    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:50.675790    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:50 GMT
	I0612 14:39:50.675790    6676 round_trippers.go:580]     Audit-Id: d1ac2eb7-e57a-4b12-ad90-a3527d65817d
	I0612 14:39:50.675996    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:50.676529    6676 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
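The repeated GET /api/v1/nodes/multinode-025000 calls are a readiness poll: fetch the Node object roughly every 500 ms and check its NodeReady condition, as node_ready.go logs here. Below is a minimal client-go sketch of the same loop, assuming a kubeconfig-backed clientset; the structure and names are illustrative, not minikube's actual node_ready.go implementation.

    // Poll a Node until its NodeReady condition reports "True", or time out.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for ctx.Err() == nil {
            node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-025000", metav1.GetOptions{})
            if err == nil && nodeReady(node) {
                fmt.Println(`node "multinode-025000" has status "Ready":"True"`)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for NodeReady")
    }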
	I0612 14:39:51.184610    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:51.184720    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:51.184720    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:51.184720    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:51.190739    6676 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 14:39:51.190876    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:51.190876    6676 round_trippers.go:580]     Audit-Id: c009c86e-1a3e-412a-8903-10765c631dd6
	I0612 14:39:51.190876    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:51.190876    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:51.190876    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:51.190876    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:51.190876    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:51 GMT
	I0612 14:39:51.191687    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:51.687187    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:51.687462    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:51.687517    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:51.687517    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:51.876802    6676 round_trippers.go:574] Response Status: 200 OK in 189 milliseconds
	I0612 14:39:51.883639    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:51.883639    6676 round_trippers.go:580]     Audit-Id: c6c341aa-3bcc-4929-95d0-2ca108400e46
	I0612 14:39:51.883639    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:51.883639    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:51.883639    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:51.883639    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:51.883826    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:51 GMT
	I0612 14:39:51.884620    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:52.120197    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:39:52.133052    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:39:52.133052    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:39:52.181149    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:52.181149    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:52.181149    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:52.181149    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:52.184557    6676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 14:39:52.184739    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:52.184739    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:52.184739    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:52.184739    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:52 GMT
	I0612 14:39:52.184739    6676 round_trippers.go:580]     Audit-Id: c5f979b4-af69-4ce0-b0ff-1cb78d8ab346
	I0612 14:39:52.184739    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:52.184739    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:52.185132    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:52.597934    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:39:52.606404    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:39:52.606561    6676 sshutil.go:53] new ssh client: &{IP:172.23.198.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 14:39:52.681211    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:52.681211    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:52.681211    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:52.681211    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:52.681964    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:52.681964    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:52.681964    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:52.681964    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:52 GMT
	I0612 14:39:52.681964    6676 round_trippers.go:580]     Audit-Id: 68f38086-aa35-4a68-84aa-b75b4c3fbc0b
	I0612 14:39:52.681964    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:52.687624    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:52.687624    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:52.688027    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:52.688027    6676 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 14:39:52.765122    6676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0612 14:39:53.171968    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:53.171968    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:53.171968    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:53.171968    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:53.173941    6676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 14:39:53.175046    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:53.175046    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:53 GMT
	I0612 14:39:53.175046    6676 round_trippers.go:580]     Audit-Id: 93069427-6cfb-48fb-8acc-9ad7da8caf5f
	I0612 14:39:53.175046    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:53.175046    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:53.175046    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:53.175121    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:53.175468    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:53.224100    6676 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0612 14:39:53.224228    6676 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0612 14:39:53.224228    6676 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0612 14:39:53.224228    6676 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0612 14:39:53.224294    6676 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0612 14:39:53.224294    6676 command_runner.go:130] > pod/storage-provisioner created
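The storage-provisioner manifest was first staged onto the VM (the scp step earlier in the log) and then applied by running kubectl over SSH with the machine's private key, producing the "created" lines above. A self-contained sketch of that ssh_runner step using golang.org/x/crypto/ssh follows; the address, username, key path, and kubectl command line are taken from the log, while the helper itself is an assumption for illustration, not minikube's ssh_runner code.

    // Run a command on the minikube VM over SSH, authenticated with the
    // machine's id_rsa key, and return its combined output.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        })
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("172.23.198.154:22", "docker",
            `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa`,
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
        fmt.Print(out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }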
	I0612 14:39:53.675547    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:53.675547    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:53.675547    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:53.675547    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:53.676285    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:53.676285    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:53.676285    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:53.680628    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:53.680628    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:53.680628    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:53 GMT
	I0612 14:39:53.680628    6676 round_trippers.go:580]     Audit-Id: b59294e7-4661-4851-a765-a35988b6f0c3
	I0612 14:39:53.680628    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:53.681103    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:54.187130    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:54.187130    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:54.187130    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:54.187130    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:54.190930    6676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 14:39:54.191271    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:54.191271    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:54.191271    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:54 GMT
	I0612 14:39:54.191271    6676 round_trippers.go:580]     Audit-Id: 760969e9-d8a1-4f1c-8b94-345cebe60502
	I0612 14:39:54.191271    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:54.191271    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:54.191271    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:54.191812    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:54.636291    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:39:54.636291    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:39:54.648285    6676 sshutil.go:53] new ssh client: &{IP:172.23.198.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 14:39:54.677027    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:54.677027    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:54.677027    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:54.677027    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:54.680148    6676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 14:39:54.680148    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:54.680148    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:54.681201    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:54.681201    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:54 GMT
	I0612 14:39:54.681201    6676 round_trippers.go:580]     Audit-Id: 1f4bd41d-d363-450c-9d58-fc6ea2dcd4b7
	I0612 14:39:54.681201    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:54.681201    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:54.681547    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:54.784351    6676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0612 14:39:54.922804    6676 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0612 14:39:54.925449    6676 round_trippers.go:463] GET https://172.23.198.154:8443/apis/storage.k8s.io/v1/storageclasses
	I0612 14:39:54.925449    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:54.925449    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:54.925449    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:54.927234    6676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 14:39:54.927234    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:54.928332    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:54.928332    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:54.928332    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:54.928332    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:54.928332    6676 round_trippers.go:580]     Content-Length: 1273
	I0612 14:39:54.928332    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:54 GMT
	I0612 14:39:54.928332    6676 round_trippers.go:580]     Audit-Id: f0e36305-6281-4eca-869f-f736992839eb
	I0612 14:39:54.928408    6676 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"standard","uid":"64f66597-1d17-4c6a-b052-9bb5158cde48","resourceVersion":"433","creationTimestamp":"2024-06-12T21:39:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-12T21:39:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0612 14:39:54.928932    6676 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"64f66597-1d17-4c6a-b052-9bb5158cde48","resourceVersion":"433","creationTimestamp":"2024-06-12T21:39:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-12T21:39:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0612 14:39:54.928989    6676 round_trippers.go:463] PUT https://172.23.198.154:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0612 14:39:54.928989    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:54.929080    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:54.929080    6676 round_trippers.go:473]     Content-Type: application/json
	I0612 14:39:54.929080    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:54.930057    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:54.930057    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:54.930057    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:54.932975    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:54.932975    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:54.932975    6676 round_trippers.go:580]     Content-Length: 1220
	I0612 14:39:54.932975    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:54 GMT
	I0612 14:39:54.932975    6676 round_trippers.go:580]     Audit-Id: 4f57e471-9de1-4a32-98c8-3b6fb7b0731b
	I0612 14:39:54.932975    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:54.933052    6676 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"64f66597-1d17-4c6a-b052-9bb5158cde48","resourceVersion":"433","creationTimestamp":"2024-06-12T21:39:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-12T21:39:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0612 14:39:54.936283    6676 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0612 14:39:54.940846    6676 addons.go:510] duration metric: took 9.6311059s for enable addons: enabled=[storage-provisioner default-storageclass]
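The storageclasses GET followed by the PUT just above is a read-modify-write: fetch the freshly created "standard" class, ensure the is-default-class annotation is set, and write the object back carrying the resourceVersion from the GET, which is what lets the apiserver reject conflicting concurrent updates. A hedged client-go equivalent, assuming the same kubeconfig-backed clientset as before:

    // Mark the "standard" StorageClass as the default via GET + PUT (Update).
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // GET /apis/storage.k8s.io/v1/storageclasses/standard
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

        // PUT /apis/storage.k8s.io/v1/storageclasses/standard
        // The object still holds the GET's resourceVersion, so a conflicting
        // write in between would make this call fail with a 409.
        if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }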
	I0612 14:39:55.175142    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:55.175142    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:55.175238    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:55.175238    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:55.175539    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:55.179039    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:55.179039    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:55.179039    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:55.179039    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:55 GMT
	I0612 14:39:55.179039    6676 round_trippers.go:580]     Audit-Id: c7086754-9474-4bca-ac27-3d8470b3fe0f
	I0612 14:39:55.179039    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:55.179039    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:55.179286    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:55.179475    6676 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 14:39:55.686624    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:55.686810    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:55.686810    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:55.686810    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:55.692817    6676 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 14:39:55.692817    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:55.692817    6676 round_trippers.go:580]     Audit-Id: afa81022-f245-49c8-be9b-5d492e45ea1e
	I0612 14:39:55.692817    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:55.692817    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:55.692817    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:55.692817    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:55.692817    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:55 GMT
	I0612 14:39:55.693355    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:56.180114    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:56.180181    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:56.180181    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:56.180181    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:56.180535    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:56.180535    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:56.180535    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:56.180535    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:56 GMT
	I0612 14:39:56.180535    6676 round_trippers.go:580]     Audit-Id: 327f1e32-5e33-4c52-84d9-a5fb8fc0b889
	I0612 14:39:56.180535    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:56.180535    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:56.184489    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:56.184941    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"348","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0612 14:39:56.672609    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:56.672609    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:56.672609    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:56.672609    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:56.673259    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:56.673259    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:56.673259    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:56.673259    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:56.673259    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:56.673259    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:56.673259    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:56 GMT
	I0612 14:39:56.673259    6676 round_trippers.go:580]     Audit-Id: 7b8d8f15-848f-4ff7-bc8b-48d43544baa2
	I0612 14:39:56.676752    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:56.677325    6676 node_ready.go:49] node "multinode-025000" has status "Ready":"True"
	I0612 14:39:56.677512    6676 node_ready.go:38] duration metric: took 10.5057415s for node "multinode-025000" to be "Ready" ...
	I0612 14:39:56.677512    6676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
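From this point the log switches from node readiness to pod readiness: for each system-critical label listed above, pod_ready.go lists kube-system pods and waits for their PodReady condition. A sketch of that wait follows, with the label list copied from the log; the loop shape and helper names are illustrative assumptions, not minikube's pod_ready.go.

    // Wait for all kube-system pods matching each critical label to be Ready.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    var criticalSelectors = []string{
        "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
        "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    }

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        for _, sel := range criticalSelectors {
            for {
                if ctx.Err() != nil {
                    fmt.Println("timed out waiting for", sel)
                    return
                }
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
                if err == nil && len(pods.Items) > 0 {
                    ready := true
                    for i := range pods.Items {
                        ready = ready && podReady(&pods.Items[i])
                    }
                    if ready {
                        fmt.Printf("pods matching %q are Ready\n", sel)
                        break
                    }
                }
                time.Sleep(500 * time.Millisecond)
            }
        }
    }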
	I0612 14:39:56.677713    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods
	I0612 14:39:56.677713    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:56.677799    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:56.677799    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:56.678097    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:56.678097    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:56.678097    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:56.678097    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:56.678097    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:56.678097    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:56.678097    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:56 GMT
	I0612 14:39:56.682669    6676 round_trippers.go:580]     Audit-Id: ee333a1d-3c83-4fdd-be37-cca0fdd0c1a9
	I0612 14:39:56.683776    6676 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"442"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"440","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0612 14:39:56.688986    6676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:56.688986    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 14:39:56.688986    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:56.688986    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:56.688986    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:56.689522    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:56.689522    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:56.689522    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:56.689522    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:56.689522    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:56.689522    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:56.689522    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:56 GMT
	I0612 14:39:56.689522    6676 round_trippers.go:580]     Audit-Id: 93948d10-f413-46df-bb61-005f9452bb9d
	I0612 14:39:56.692354    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"440","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0612 14:39:56.693077    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:56.693077    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:56.693109    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:56.693109    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:56.695699    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:39:56.695699    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:56.695699    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:56.695699    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:56 GMT
	I0612 14:39:56.695699    6676 round_trippers.go:580]     Audit-Id: b4b708e8-e2a2-43d4-981e-7577d1d3f8c5
	I0612 14:39:56.695699    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:56.695699    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:56.695699    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:56.696576    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:57.196087    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 14:39:57.196168    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:57.196168    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:57.196202    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:57.196681    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:57.196681    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:57.196681    6676 round_trippers.go:580]     Audit-Id: 5d3572c6-c811-4537-941a-518a94337349
	I0612 14:39:57.196681    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:57.196681    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:57.196681    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:57.196681    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:57.196681    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:57 GMT
	I0612 14:39:57.200380    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"440","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0612 14:39:57.201224    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:57.201224    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:57.201224    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:57.201224    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:57.204103    6676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 14:39:57.204159    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:57.204159    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:57.204159    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:57.204226    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:57 GMT
	I0612 14:39:57.204226    6676 round_trippers.go:580]     Audit-Id: c78523d9-67c8-4a3a-8f0c-e77563fc64a7
	I0612 14:39:57.204226    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:57.204226    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:57.204528    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:57.695887    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 14:39:57.695887    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:57.695887    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:57.695887    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:57.697265    6676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 14:39:57.697265    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:57.697265    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:57.697265    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:57.697265    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:57 GMT
	I0612 14:39:57.697265    6676 round_trippers.go:580]     Audit-Id: 8e3d5326-018f-4ea9-bf36-b1cea72d5b0c
	I0612 14:39:57.697265    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:57.697265    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:57.697265    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"440","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0612 14:39:57.701685    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:57.701773    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:57.701773    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:57.701773    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:57.709763    6676 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 14:39:57.709763    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:57.709763    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:57.709763    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:57.709763    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:57.709763    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:57.709763    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:57 GMT
	I0612 14:39:57.709763    6676 round_trippers.go:580]     Audit-Id: ad985f1e-8bb1-4bfa-8b8f-2a4f73aa8e93
	I0612 14:39:57.710644    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:58.206900    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 14:39:58.206900    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:58.206900    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:58.206900    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:58.211031    6676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 14:39:58.211792    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:58.211792    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:58.211792    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:58.211792    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:58.211792    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:58 GMT
	I0612 14:39:58.212021    6676 round_trippers.go:580]     Audit-Id: de62b74e-dec1-494c-a1fb-aaa81c57d994
	I0612 14:39:58.212021    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:58.212021    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"453","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6809 chars]
	I0612 14:39:58.212632    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:58.213168    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:58.213168    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:58.213168    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:58.213414    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:58.213414    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:58.213414    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:58.213414    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:58.213414    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:58.213414    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:58.213414    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:58 GMT
	I0612 14:39:58.213414    6676 round_trippers.go:580]     Audit-Id: ea3ff1a3-f661-442c-b987-641edf29b59f
	I0612 14:39:58.216756    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:58.692165    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 14:39:58.692165    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:58.692165    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:58.692165    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:58.692918    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:58.692918    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:58.692918    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:58.692918    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:58 GMT
	I0612 14:39:58.692918    6676 round_trippers.go:580]     Audit-Id: 466c1cc1-9c77-4e85-be72-1517b3461c74
	I0612 14:39:58.696486    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:58.696486    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:58.696486    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:58.696754    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"453","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6809 chars]
	I0612 14:39:58.697382    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:58.697993    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:58.697993    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:58.697993    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:58.698263    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:58.698263    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:58.698263    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:58.698263    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:58.698263    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:58.698263    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:58 GMT
	I0612 14:39:58.698263    6676 round_trippers.go:580]     Audit-Id: 39da2705-ac1e-4594-a2ff-3d05f66b1896
	I0612 14:39:58.698263    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:58.701868    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:58.702359    6676 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 14:39:59.195924    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 14:39:59.195924    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.195924    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.195924    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.196498    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:59.199893    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.199893    6676 round_trippers.go:580]     Audit-Id: 12ae25dc-90db-4388-ae7a-be01f4dc0148
	I0612 14:39:59.199893    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.199893    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.199893    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.199893    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.199893    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.200114    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"456","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0612 14:39:59.201553    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:59.201553    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.201553    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.201634    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.204337    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:39:59.204337    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.204337    6676 round_trippers.go:580]     Audit-Id: e178acd6-07da-4f39-bdd9-97350822f8de
	I0612 14:39:59.204337    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.204337    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.204337    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.204337    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.204337    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.205027    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:59.205415    6676 pod_ready.go:92] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"True"
	I0612 14:39:59.205505    6676 pod_ready.go:81] duration metric: took 2.5165103s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
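The ~500ms request cadence above (GET pod, GET node, wait, repeat) is the readiness poll that pod_ready.go drives until the pod's Ready condition flips to "True". A minimal client-go sketch of the same loop, assuming a reachable kubeconfig; the helper name waitPodReady is hypothetical, not minikube's own code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the API server until the pod reports the PodReady
    // condition as "True", mirroring the ~500ms cycle in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				return nil // the log's `has status "Ready":"True"` case
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // timeout: the 6m0s budget in the log
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	fmt.Println(waitPodReady(ctx, cs, "kube-system", "coredns-7db6d8ff4d-vgcxw"))
    }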
	I0612 14:39:59.205505    6676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.205712    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025000
	I0612 14:39:59.205712    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.205712    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.205712    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.209128    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:59.209128    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.209128    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.209128    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.209128    6676 round_trippers.go:580]     Audit-Id: 73182c70-c843-4292-98a1-1685e2a59b2d
	I0612 14:39:59.209128    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.209128    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.209128    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.209457    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"630bafc4-4576-4974-b638-7ab52dcfec18","resourceVersion":"416","creationTimestamp":"2024-06-12T21:39:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.198.154:2379","kubernetes.io/config.hash":"04dcbc8e258f964f689941b6844769d9","kubernetes.io/config.mirror":"04dcbc8e258f964f689941b6844769d9","kubernetes.io/config.seen":"2024-06-12T21:39:23.999683415Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0612 14:39:59.209457    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:59.209457    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.209986    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.209986    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.210307    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:59.210307    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.210307    6676 round_trippers.go:580]     Audit-Id: aa136081-96e4-45ea-8454-7be691b44485
	I0612 14:39:59.210307    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.210307    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.210307    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.210307    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.210307    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.213342    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:59.213816    6676 pod_ready.go:92] pod "etcd-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 14:39:59.213880    6676 pod_ready.go:81] duration metric: took 8.3752ms for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.213880    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.213949    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025000
	I0612 14:39:59.214047    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.214047    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.214047    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.214787    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:59.214787    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.214787    6676 round_trippers.go:580]     Audit-Id: 16c33e10-c710-4bd2-ae4d-6d99dc654ebe
	I0612 14:39:59.214787    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.217417    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.217417    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.217417    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.217417    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.217615    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025000","namespace":"kube-system","uid":"6b429685-b322-4b00-83fc-743786ff40e1","resourceVersion":"418","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.198.154:8443","kubernetes.io/config.hash":"610414aa8160848c0b6b79ea0a700b83","kubernetes.io/config.mirror":"610414aa8160848c0b6b79ea0a700b83","kubernetes.io/config.seen":"2024-06-12T21:39:31.214464964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0612 14:39:59.218168    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:59.218168    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.218168    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.218168    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.220846    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:39:59.220846    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.220958    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.220958    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.220958    6676 round_trippers.go:580]     Audit-Id: 9ec2f72b-c4dc-45f6-9105-3e37055e7eb6
	I0612 14:39:59.220958    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.220958    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.220958    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.221139    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:59.221608    6676 pod_ready.go:92] pod "kube-apiserver-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 14:39:59.221677    6676 pod_ready.go:81] duration metric: took 7.7972ms for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.221752    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.221874    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025000
	I0612 14:39:59.221913    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.221913    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.221913    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.226214    6676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 14:39:59.226243    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.226243    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.226243    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.226243    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.226243    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.226243    6676 round_trippers.go:580]     Audit-Id: e7ea6435-95d9-42e8-98ee-b5b90dbc1d10
	I0612 14:39:59.226243    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.226243    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025000","namespace":"kube-system","uid":"68c9aa4f-49ee-439c-ad51-7943e65c0085","resourceVersion":"417","creationTimestamp":"2024-06-12T21:39:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.mirror":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.seen":"2024-06-12T21:39:23.999674614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0612 14:39:59.226885    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:59.226885    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.226885    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.226885    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.237117    6676 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 14:39:59.237863    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.237863    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.237863    6676 round_trippers.go:580]     Audit-Id: 5eea0417-7943-4cb4-8f9c-3222c2beb1a3
	I0612 14:39:59.237863    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.237863    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.237929    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.237929    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.238050    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:59.238371    6676 pod_ready.go:92] pod "kube-controller-manager-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 14:39:59.238371    6676 pod_ready.go:81] duration metric: took 16.6193ms for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.238371    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.238371    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 14:39:59.238371    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.238371    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.238371    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.240611    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:39:59.240611    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.240611    6676 round_trippers.go:580]     Audit-Id: 51386276-9a58-4993-9202-fc94f30afee0
	I0612 14:39:59.240611    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.240611    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.240611    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.241872    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.241872    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.242201    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47lr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"10b24fa7-8eea-4fbb-ab18-404e853aa7ab","resourceVersion":"411","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0612 14:39:59.242348    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:59.242348    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.242348    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.242348    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.247880    6676 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 14:39:59.247923    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.247923    6676 round_trippers.go:580]     Audit-Id: bd74af26-c07a-4f23-9606-559503a6d273
	I0612 14:39:59.247923    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.247923    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.247923    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.247923    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.247968    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.248262    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:59.248874    6676 pod_ready.go:92] pod "kube-proxy-47lr8" in "kube-system" namespace has status "Ready":"True"
	I0612 14:39:59.248874    6676 pod_ready.go:81] duration metric: took 10.5026ms for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.248874    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.402501    6676 request.go:629] Waited for 153.4197ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 14:39:59.402718    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 14:39:59.402805    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.402805    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.402849    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.403603    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:59.407168    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.407168    6676 round_trippers.go:580]     Audit-Id: afc0d2c2-8228-43d0-a310-793d255cbcf8
	I0612 14:39:59.407168    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.407168    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.407168    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.407168    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.407168    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.407372    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025000","namespace":"kube-system","uid":"83b272cb-1286-47d8-bcb1-a66056dff2a5","resourceVersion":"415","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.mirror":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.seen":"2024-06-12T21:39:31.214466565Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0612 14:39:59.596332    6676 request.go:629] Waited for 188.3097ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:59.596627    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:39:59.596627    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.596627    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.596627    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.596988    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:59.596988    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.600448    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.600448    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.600448    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.600448    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.600448    6676 round_trippers.go:580]     Audit-Id: 41667170-22ed-471f-967d-7e53b005f2f9
	I0612 14:39:59.600448    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.600738    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0612 14:39:59.601828    6676 pod_ready.go:92] pod "kube-scheduler-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 14:39:59.601886    6676 pod_ready.go:81] duration metric: took 353.0112ms for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:39:59.601886    6676 pod_ready.go:38] duration metric: took 2.9243649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
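The "Waited for Nms due to client-side throttling, not priority and fairness" lines above and below come from client-go's per-client rate limiter: once the burst allowance is spent, each request blocks until a token is available, and request.go logs the delay. A standalone sketch of the same token-bucket behaviour using golang.org/x/time/rate; the QPS and burst values here are illustrative, not minikube's actual settings:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// 5 requests/second with a burst of 10; once the burst is spent,
    	// Wait blocks and the caller observes a delay like the log's
    	// "Waited for 153.4197ms" messages.
    	limiter := rate.NewLimiter(rate.Limit(5), 10)
    	for i := 0; i < 15; i++ {
    		start := time.Now()
    		if err := limiter.Wait(context.Background()); err != nil {
    			panic(err)
    		}
    		if d := time.Since(start); d > time.Millisecond {
    			fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
    		}
    	}
    }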
	I0612 14:39:59.601955    6676 api_server.go:52] waiting for apiserver process to appear ...
	I0612 14:39:59.612041    6676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 14:39:59.643704    6676 command_runner.go:130] > 1956
	I0612 14:39:59.643803    6676 api_server.go:72] duration metric: took 14.3340478s to wait for apiserver process to appear ...
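The process check above shells out over SSH to pgrep; -x matches the pattern against the whole string, -n picks the newest match, and -f matches the full command line, so the single line of output ("1956") is the apiserver's PID. An equivalent local probe in Go, using the same pattern, purely as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// pgrep exits non-zero when nothing matches, which Output reports as err.
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver process not found:", err)
    		return
    	}
    	fmt.Printf("newest matching PID: %s", out) // the log saw 1956
    }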
	I0612 14:39:59.643803    6676 api_server.go:88] waiting for apiserver healthz status ...
	I0612 14:39:59.643913    6676 api_server.go:253] Checking apiserver healthz at https://172.23.198.154:8443/healthz ...
	I0612 14:39:59.652037    6676 api_server.go:279] https://172.23.198.154:8443/healthz returned 200:
	ok
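The healthz probe above is a bare HTTPS GET whose body is literally "ok" once every check passes. A minimal sketch, assuming the kubeadm default that /healthz is readable anonymously; certificate verification is skipped only because this targets a throwaway test VM:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: production code would pin the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://172.23.198.154:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }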
	I0612 14:39:59.653656    6676 round_trippers.go:463] GET https://172.23.198.154:8443/version
	I0612 14:39:59.653656    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.653720    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.653720    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.655381    6676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 14:39:59.655381    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.655381    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.655381    6676 round_trippers.go:580]     Audit-Id: 228a5f23-678e-4e76-9ed2-f49b0192ae39
	I0612 14:39:59.655381    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.655381    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.655381    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.655381    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.655381    6676 round_trippers.go:580]     Content-Length: 263
	I0612 14:39:59.655381    6676 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0612 14:39:59.655914    6676 api_server.go:141] control plane version: v1.30.1
	I0612 14:39:59.655914    6676 api_server.go:131] duration metric: took 12.1105ms to wait for apiserver health ...
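The /version payload above is plain JSON, so extracting the control plane version takes only a small struct whose tags match the keys in the response body. The struct below is a trimmed, hypothetical stand-in for the full version info type, shown here just to make the mapping explicit:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    	Platform   string `json:"platform"`
    }

    func main() {
    	raw := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.1","platform":"linux/amd64"}`)
    	var v versionInfo
    	if err := json.Unmarshal(raw, &v); err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion) // v1.30.1, as logged
    }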
	I0612 14:39:59.656085    6676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 14:39:59.809576    6676 request.go:629] Waited for 153.1885ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods
	I0612 14:39:59.809751    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods
	I0612 14:39:59.809794    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.809794    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.809794    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.810566    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:59.810566    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.810566    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.810566    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:39:59.810566    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:39:59.810566    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:39:59.810566    6676 round_trippers.go:580]     Audit-Id: 61189d12-cc46-43b0-9875-cfd9a4ea8c08
	I0612 14:39:59.810566    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:39:59.817685    6676 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"456","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0612 14:39:59.820572    6676 system_pods.go:59] 8 kube-system pods found
	I0612 14:39:59.820572    6676 system_pods.go:61] "coredns-7db6d8ff4d-vgcxw" [c5bd143a-d39e-46af-9308-0a97bb45729c] Running
	I0612 14:39:59.820646    6676 system_pods.go:61] "etcd-multinode-025000" [630bafc4-4576-4974-b638-7ab52dcfec18] Running
	I0612 14:39:59.820646    6676 system_pods.go:61] "kindnet-bqlg8" [1f004a05-3f5f-444b-9ac0-88f0e23da904] Running
	I0612 14:39:59.820646    6676 system_pods.go:61] "kube-apiserver-multinode-025000" [6b429685-b322-4b00-83fc-743786ff40e1] Running
	I0612 14:39:59.820646    6676 system_pods.go:61] "kube-controller-manager-multinode-025000" [68c9aa4f-49ee-439c-ad51-7943e65c0085] Running
	I0612 14:39:59.820646    6676 system_pods.go:61] "kube-proxy-47lr8" [10b24fa7-8eea-4fbb-ab18-404e853aa7ab] Running
	I0612 14:39:59.820646    6676 system_pods.go:61] "kube-scheduler-multinode-025000" [83b272cb-1286-47d8-bcb1-a66056dff2a5] Running
	I0612 14:39:59.820646    6676 system_pods.go:61] "storage-provisioner" [d20f7489-1aa1-44b8-9221-4d1849884be4] Running
	I0612 14:39:59.820646    6676 system_pods.go:74] duration metric: took 164.5606ms to wait for pod list to return data ...
	I0612 14:39:59.820861    6676 default_sa.go:34] waiting for default service account to be created ...
	I0612 14:39:59.996215    6676 request.go:629] Waited for 175.0014ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/namespaces/default/serviceaccounts
	I0612 14:39:59.996215    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/default/serviceaccounts
	I0612 14:39:59.996215    6676 round_trippers.go:469] Request Headers:
	I0612 14:39:59.996215    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:39:59.996215    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:39:59.997028    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:39:59.997028    6676 round_trippers.go:577] Response Headers:
	I0612 14:39:59.997028    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:39:59.997028    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:40:00.000826    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:40:00.000826    6676 round_trippers.go:580]     Content-Length: 261
	I0612 14:40:00.000826    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:39:59 GMT
	I0612 14:40:00.000826    6676 round_trippers.go:580]     Audit-Id: b69cded8-06d0-44cb-9544-58bce4b0083b
	I0612 14:40:00.000826    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:40:00.000826    6676 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"462"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"876e1679-16ec-44bf-9460-cce6ea3acbf0","resourceVersion":"355","creationTimestamp":"2024-06-12T21:39:45Z"}}]}
	I0612 14:40:00.001233    6676 default_sa.go:45] found service account: "default"
	I0612 14:40:00.001323    6676 default_sa.go:55] duration metric: took 180.4055ms for default service account to be created ...
	I0612 14:40:00.001323    6676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 14:40:00.200137    6676 request.go:629] Waited for 198.7419ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods
	I0612 14:40:00.200137    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods
	I0612 14:40:00.200368    6676 round_trippers.go:469] Request Headers:
	I0612 14:40:00.200368    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:40:00.200368    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:40:00.201107    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:40:00.201107    6676 round_trippers.go:577] Response Headers:
	I0612 14:40:00.201107    6676 round_trippers.go:580]     Audit-Id: 3356d084-15d4-429c-8cd6-9b95a66014aa
	I0612 14:40:00.201107    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:40:00.201107    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:40:00.201107    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:40:00.205526    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:40:00.205526    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:40:00 GMT
	I0612 14:40:00.207250    6676 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"462"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"456","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0612 14:40:00.210284    6676 system_pods.go:86] 8 kube-system pods found
	I0612 14:40:00.210284    6676 system_pods.go:89] "coredns-7db6d8ff4d-vgcxw" [c5bd143a-d39e-46af-9308-0a97bb45729c] Running
	I0612 14:40:00.210367    6676 system_pods.go:89] "etcd-multinode-025000" [630bafc4-4576-4974-b638-7ab52dcfec18] Running
	I0612 14:40:00.210367    6676 system_pods.go:89] "kindnet-bqlg8" [1f004a05-3f5f-444b-9ac0-88f0e23da904] Running
	I0612 14:40:00.210367    6676 system_pods.go:89] "kube-apiserver-multinode-025000" [6b429685-b322-4b00-83fc-743786ff40e1] Running
	I0612 14:40:00.210367    6676 system_pods.go:89] "kube-controller-manager-multinode-025000" [68c9aa4f-49ee-439c-ad51-7943e65c0085] Running
	I0612 14:40:00.210367    6676 system_pods.go:89] "kube-proxy-47lr8" [10b24fa7-8eea-4fbb-ab18-404e853aa7ab] Running
	I0612 14:40:00.210367    6676 system_pods.go:89] "kube-scheduler-multinode-025000" [83b272cb-1286-47d8-bcb1-a66056dff2a5] Running
	I0612 14:40:00.210367    6676 system_pods.go:89] "storage-provisioner" [d20f7489-1aa1-44b8-9221-4d1849884be4] Running
	I0612 14:40:00.210367    6676 system_pods.go:126] duration metric: took 209.0434ms to wait for k8s-apps to be running ...
	I0612 14:40:00.210367    6676 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 14:40:00.220849    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 14:40:00.248938    6676 system_svc.go:56] duration metric: took 38.571ms WaitForService to wait for kubelet
	I0612 14:40:00.248938    6676 kubeadm.go:576] duration metric: took 14.9391803s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 14:40:00.249118    6676 node_conditions.go:102] verifying NodePressure condition ...
	I0612 14:40:00.405755    6676 request.go:629] Waited for 156.3437ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/nodes
	I0612 14:40:00.406107    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes
	I0612 14:40:00.406107    6676 round_trippers.go:469] Request Headers:
	I0612 14:40:00.406107    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:40:00.406107    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:40:00.406509    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:40:00.406509    6676 round_trippers.go:577] Response Headers:
	I0612 14:40:00.409773    6676 round_trippers.go:580]     Audit-Id: 1e0d74e9-4613-4b69-94c9-4f7d7cd4a2ba
	I0612 14:40:00.409773    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:40:00.409773    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:40:00.409773    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:40:00.409773    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:40:00.409773    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:40:00 GMT
	I0612 14:40:00.409959    6676 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"462"},"items":[{"metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"436","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0612 14:40:00.410914    6676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 14:40:00.410914    6676 node_conditions.go:123] node cpu capacity is 2
	I0612 14:40:00.410914    6676 node_conditions.go:105] duration metric: took 161.796ms to run NodePressure ...
	I0612 14:40:00.410914    6676 start.go:240] waiting for startup goroutines ...
	I0612 14:40:00.410914    6676 start.go:245] waiting for cluster config update ...
	I0612 14:40:00.410914    6676 start.go:254] writing updated cluster config ...
	I0612 14:40:00.415883    6676 out.go:177] 
	I0612 14:40:00.419655    6676 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:40:00.428027    6676 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:40:00.428488    6676 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 14:40:00.432931    6676 out.go:177] * Starting "multinode-025000-m02" worker node in "multinode-025000" cluster
	I0612 14:40:00.437851    6676 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 14:40:00.437851    6676 cache.go:56] Caching tarball of preloaded images
	I0612 14:40:00.438104    6676 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 14:40:00.438441    6676 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 14:40:00.438441    6676 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 14:40:00.439234    6676 start.go:360] acquireMachinesLock for multinode-025000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 14:40:00.439234    6676 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-025000-m02"
	I0612 14:40:00.442152    6676 start.go:93] Provisioning new machine with config: &{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 14:40:00.442152    6676 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0612 14:40:00.443558    6676 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0612 14:40:00.445771    6676 start.go:159] libmachine.API.Create for "multinode-025000" (driver="hyperv")
	I0612 14:40:00.445812    6676 client.go:168] LocalClient.Create starting
	I0612 14:40:00.445973    6676 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0612 14:40:00.445973    6676 main.go:141] libmachine: Decoding PEM data...
	I0612 14:40:00.445973    6676 main.go:141] libmachine: Parsing certificate...
	I0612 14:40:00.446703    6676 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0612 14:40:00.446865    6676 main.go:141] libmachine: Decoding PEM data...
	I0612 14:40:00.446865    6676 main.go:141] libmachine: Parsing certificate...
	I0612 14:40:00.447058    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0612 14:40:02.286858    6676 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0612 14:40:02.287210    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:02.287362    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0612 14:40:03.937609    6676 main.go:141] libmachine: [stdout =====>] : False
	
	I0612 14:40:03.937609    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:03.937721    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 14:40:05.364155    6676 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 14:40:05.364155    6676 main.go:141] libmachine: [stderr =====>] : 
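
Taken together, the three PowerShell probes above are the Hyper-V preflight: is the Hyper-V PowerShell module installed, is the caller a member of BUILTIN\Hyper-V Administrators (well-known SID S-1-5-32-578), and, failing that, does the caller hold the built-in Administrator role. A standalone sketch of the same checks, with the results this log recorded noted in comments (nothing here is minikube-internal; the commands are copied from the log):

    # 1. Hyper-V PowerShell module available?
    @(Get-Module -ListAvailable hyper-v).Name | Get-Unique      # log: "Hyper-V"

    $principal = [Security.Principal.WindowsPrincipal] `
                 [Security.Principal.WindowsIdentity]::GetCurrent()

    # 2. Member of BUILTIN\Hyper-V Administrators (well-known SID S-1-5-32-578)?
    $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))   # log: False

    # 3. Otherwise, full built-in Administrator (this is what authorized the run)?
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole]'Administrator')                # log: True
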
	I0612 14:40:05.364519    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 14:40:08.933532    6676 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 14:40:08.941750    6676 main.go:141] libmachine: [stderr =====>] : 
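
The switch query above is what picks the network for the new VM: it keeps any External switch plus the built-in "Default Switch" by its fixed GUID, and sorts External switches first. Here only the Default Switch matched (its SwitchType of 1 corresponds to Internal in the Hyper-V enum). The same pipeline, reflowed for readability:

    # Enumerate candidate switches the way the log does: any External switch,
    # or the well-known "Default Switch" identified by its fixed GUID.
    [Console]::OutputEncoding = [Text.Encoding]::UTF8
    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or
                       ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType |
        ConvertTo-Json
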
	I0612 14:40:08.945210    6676 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1718047936-19044-amd64.iso...
	I0612 14:40:09.442913    6676 main.go:141] libmachine: Creating SSH key...
	I0612 14:40:09.525795    6676 main.go:141] libmachine: Creating VM...
	I0612 14:40:09.525795    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0612 14:40:12.390429    6676 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0612 14:40:12.390429    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:12.390429    6676 main.go:141] libmachine: Using switch "Default Switch"
	I0612 14:40:12.402311    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0612 14:40:14.121679    6676 main.go:141] libmachine: [stdout =====>] : True
	
	I0612 14:40:14.121878    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:14.121878    6676 main.go:141] libmachine: Creating VHD
	I0612 14:40:14.121963    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0612 14:40:17.848906    6676 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E6BDF54A-15D9-494E-9512-89AFF89BEC93
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0612 14:40:17.848906    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:17.861353    6676 main.go:141] libmachine: Writing magic tar header
	I0612 14:40:17.861353    6676 main.go:141] libmachine: Writing SSH key tar header
	I0612 14:40:17.861933    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0612 14:40:20.981695    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:20.993856    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:20.993856    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\disk.vhd' -SizeBytes 20000MB
	I0612 14:40:23.506949    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:23.518154    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:23.518271    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-025000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0612 14:40:27.093663    6676 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version

	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-025000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0612 14:40:27.093663    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:27.093755    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-025000-m02 -DynamicMemoryEnabled $false
	I0612 14:40:29.347834    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:29.359167    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:29.359167    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-025000-m02 -Count 2
	I0612 14:40:31.512304    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:31.512504    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:31.512593    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-025000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\boot2docker.iso'
	I0612 14:40:34.086200    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:34.087449    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:34.087543    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-025000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\disk.vhd'
	I0612 14:40:36.661351    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:36.661351    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:36.661351    6676 main.go:141] libmachine: Starting VM...
	I0612 14:40:36.671994    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-025000-m02
	I0612 14:40:39.740514    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:39.745047    6676 main.go:141] libmachine: [stderr =====>] : 
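
Strung together, the Hyper-V calls logged between 14:40:14 and 14:40:39 amount to the following provisioning sequence. Paths are shortened to $dir for readability; sizes, flags, and command names are exactly as logged (the log's "magic tar header" step, which embeds the SSH key, happens between creating and converting the disk):

    $name = 'multinode-025000-m02'
    $dir  = "C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\$name"

    # Seed a tiny fixed VHD, then convert it to a dynamic disk and
    # grow it to the requested 20000MB.
    Hyper-V\New-VHD     -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD  -Path "$dir\disk.vhd" -SizeBytes 20000MB

    # Create the VM and pin its resources: fixed memory, 2 vCPUs.
    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory    -VMName $name -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor $name -Count 2

    # Attach the boot ISO and the data disk, then boot.
    Hyper-V\Set-VMDvdDrive      -VMName $name -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"
    Hyper-V\Start-VM $name
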
	I0612 14:40:39.745047    6676 main.go:141] libmachine: Waiting for host to start...
	I0612 14:40:39.745187    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:40:42.001624    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:40:42.001624    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:42.001624    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:40:44.499372    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:44.503031    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:45.521252    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:40:47.721728    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:40:47.721728    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:47.729772    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:40:50.239267    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:50.239267    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:51.257988    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:40:53.411301    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:40:53.411301    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:53.415604    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:40:55.882300    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:40:55.887074    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:56.894076    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:40:59.072650    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:40:59.072650    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:40:59.072650    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:01.585583    6676 main.go:141] libmachine: [stdout =====>] : 
	I0612 14:41:01.585583    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:02.612282    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:04.867762    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:04.880396    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:04.880396    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:07.458511    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:07.458511    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:07.458511    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:09.560840    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:09.573333    6676 main.go:141] libmachine: [stderr =====>] : 
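
The repeated state/IP queries above are a simple readiness poll: the VM reports Running almost immediately, but the first IPv4 address only appears once the guest's DHCP lease lands (about 28 seconds here, at 14:41:07). A minimal equivalent loop; the one-second sleep is an assumption for illustration, since the log only shows the probe cadence, not the waiter's internals:

    $name = 'multinode-025000-m02'
    do {
        Start-Sleep -Seconds 1
        $state = (Hyper-V\Get-VM $name).State
        $ip    = ((Hyper-V\Get-VM $name).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    "$name is $state at $ip"
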
	I0612 14:41:09.573747    6676 machine.go:94] provisionDockerMachine start ...
	I0612 14:41:09.573747    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:11.715091    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:11.726513    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:11.726615    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:14.247530    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:14.258249    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:14.264794    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:41:14.265441    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.196.105 22 <nil> <nil>}
	I0612 14:41:14.265441    6676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 14:41:14.400324    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 14:41:14.400324    6676 buildroot.go:166] provisioning hostname "multinode-025000-m02"
	I0612 14:41:14.400324    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:16.507003    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:16.507003    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:16.507003    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:19.024907    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:19.036149    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:19.042430    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:41:19.042657    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.196.105 22 <nil> <nil>}
	I0612 14:41:19.042657    6676 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025000-m02 && echo "multinode-025000-m02" | sudo tee /etc/hostname
	I0612 14:41:19.204876    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025000-m02
	
	I0612 14:41:19.204876    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:21.301495    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:21.312734    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:21.312734    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:23.790208    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:23.801055    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:23.806067    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:41:23.806774    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.196.105 22 <nil> <nil>}
	I0612 14:41:23.807299    6676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 14:41:23.955942    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 14:41:23.956033    6676 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 14:41:23.956072    6676 buildroot.go:174] setting up certificates
	I0612 14:41:23.956072    6676 provision.go:84] configureAuth start
	I0612 14:41:23.956162    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:26.026255    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:26.029004    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:26.029181    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:28.536796    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:28.536796    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:28.548289    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:30.635430    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:30.635430    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:30.635430    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:33.157102    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:33.157102    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:33.164096    6676 provision.go:143] copyHostCerts
	I0612 14:41:33.164298    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 14:41:33.164298    6676 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 14:41:33.164298    6676 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 14:41:33.164938    6676 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 14:41:33.166231    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 14:41:33.166542    6676 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 14:41:33.166542    6676 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 14:41:33.166542    6676 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 14:41:33.167841    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 14:41:33.167841    6676 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 14:41:33.167841    6676 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 14:41:33.168520    6676 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 14:41:33.169186    6676 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-025000-m02 san=[127.0.0.1 172.23.196.105 localhost minikube multinode-025000-m02]
	I0612 14:41:33.320296    6676 provision.go:177] copyRemoteCerts
	I0612 14:41:33.333108    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 14:41:33.333108    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:35.403536    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:35.414817    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:35.414817    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:37.920544    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:37.920544    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:37.931620    6676 sshutil.go:53] new ssh client: &{IP:172.23.196.105 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 14:41:38.038804    6676 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7056806s)
	I0612 14:41:38.038907    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 14:41:38.039341    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 14:41:38.084336    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 14:41:38.084841    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0612 14:41:38.131607    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 14:41:38.131951    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 14:41:38.179715    6676 provision.go:87] duration metric: took 14.2235965s to configureAuth
	I0612 14:41:38.179817    6676 buildroot.go:189] setting minikube options for container-runtime
	I0612 14:41:38.180084    6676 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:41:38.180084    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:40.298068    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:40.298138    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:40.298138    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:42.768260    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:42.768260    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:42.780924    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:41:42.781691    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.196.105 22 <nil> <nil>}
	I0612 14:41:42.781691    6676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 14:41:42.919534    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 14:41:42.919653    6676 buildroot.go:70] root file system type: tmpfs
	I0612 14:41:42.919821    6676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 14:41:42.919821    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:44.987871    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:44.987871    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:45.000097    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:47.469128    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:47.480725    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:47.486644    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:41:47.487355    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.196.105 22 <nil> <nil>}
	I0612 14:41:47.487355    6676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.198.154"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 14:41:47.650211    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.198.154
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 14:41:47.650211    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:49.748920    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:49.748920    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:49.748920    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:52.232081    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:52.232081    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:52.250035    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:41:52.250198    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.196.105 22 <nil> <nil>}
	I0612 14:41:52.250198    6676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 14:41:54.358405    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0612 14:41:54.358405    6676 machine.go:97] duration metric: took 44.78451s to provisionDockerMachine
	I0612 14:41:54.358405    6676 client.go:171] duration metric: took 1m53.9122174s to LocalClient.Create
	I0612 14:41:54.358405    6676 start.go:167] duration metric: took 1m53.9144714s to libmachine.API.Create "multinode-025000"
	I0612 14:41:54.358405    6676 start.go:293] postStartSetup for "multinode-025000-m02" (driver="hyperv")
	I0612 14:41:54.358405    6676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 14:41:54.371622    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 14:41:54.371622    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:41:56.456827    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:41:56.456827    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:56.457235    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:41:58.960937    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:41:58.960937    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:41:58.972254    6676 sshutil.go:53] new ssh client: &{IP:172.23.196.105 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 14:41:59.080284    6676 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7086463s)
	I0612 14:41:59.093192    6676 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 14:41:59.100164    6676 command_runner.go:130] > NAME=Buildroot
	I0612 14:41:59.100164    6676 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 14:41:59.100164    6676 command_runner.go:130] > ID=buildroot
	I0612 14:41:59.100164    6676 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 14:41:59.100164    6676 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 14:41:59.100322    6676 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 14:41:59.100354    6676 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 14:41:59.100733    6676 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 14:41:59.101488    6676 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 14:41:59.101568    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 14:41:59.111606    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 14:41:59.132096    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 14:41:59.176072    6676 start.go:296] duration metric: took 4.8176511s for postStartSetup
	I0612 14:41:59.178797    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:42:01.227781    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:42:01.239281    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:01.239281    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:42:03.687789    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:42:03.699560    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:03.699560    6676 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 14:42:03.702846    6676 start.go:128] duration metric: took 2m3.2602871s to createHost
	I0612 14:42:03.703031    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:42:05.762493    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:42:05.762493    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:05.762700    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:42:08.223572    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:42:08.223572    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:08.235791    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:42:08.240752    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.196.105 22 <nil> <nil>}
	I0612 14:42:08.240752    6676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 14:42:08.375438    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718228528.379043115
	
	I0612 14:42:08.375438    6676 fix.go:216] guest clock: 1718228528.379043115
	I0612 14:42:08.375438    6676 fix.go:229] Guest: 2024-06-12 14:42:08.379043115 -0700 PDT Remote: 2024-06-12 14:42:03.7029607 -0700 PDT m=+332.438927001 (delta=4.676082415s)
	I0612 14:42:08.375438    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:42:10.473647    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:42:10.473844    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:10.473844    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:42:12.968170    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:42:12.968170    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:12.981374    6676 main.go:141] libmachine: Using SSH client type: native
	I0612 14:42:12.981947    6676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.196.105 22 <nil> <nil>}
	I0612 14:42:12.981991    6676 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718228528
	I0612 14:42:13.127324    6676 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 21:42:08 UTC 2024
	
	I0612 14:42:13.127407    6676 fix.go:236] clock set: Wed Jun 12 21:42:08 UTC 2024
	 (err=<nil>)
	I0612 14:42:13.127407    6676 start.go:83] releasing machines lock for "multinode-025000-m02", held for 2m12.6877352s
	I0612 14:42:13.127468    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:42:15.204993    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:42:15.216325    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:15.216325    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:42:17.721440    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:42:17.732608    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:17.735675    6676 out.go:177] * Found network options:
	I0612 14:42:17.738101    6676 out.go:177]   - NO_PROXY=172.23.198.154
	W0612 14:42:17.741139    6676 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 14:42:17.743501    6676 out.go:177]   - NO_PROXY=172.23.198.154
	W0612 14:42:17.745897    6676 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 14:42:17.747550    6676 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 14:42:17.750861    6676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 14:42:17.750861    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:42:17.759173    6676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 14:42:17.759173    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:42:19.913179    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:42:19.913179    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:19.913179    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:42:19.914811    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:19.914811    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:42:19.914883    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:42:22.549268    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:42:22.556272    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:22.556272    6676 sshutil.go:53] new ssh client: &{IP:172.23.196.105 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 14:42:22.581224    6676 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:42:22.581224    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:22.583128    6676 sshutil.go:53] new ssh client: &{IP:172.23.196.105 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 14:42:22.644365    6676 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0612 14:42:22.650411    6676 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8911555s)
	W0612 14:42:22.650476    6676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 14:42:22.660469    6676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 14:42:22.728937    6676 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 14:42:22.729833    6676 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9789556s)
	I0612 14:42:22.729952    6676 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0612 14:42:22.730033    6676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 14:42:22.730066    6676 start.go:494] detecting cgroup driver to use...
	I0612 14:42:22.730123    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 14:42:22.765424    6676 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0612 14:42:22.777972    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 14:42:22.809046    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 14:42:22.829844    6676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 14:42:22.843082    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 14:42:22.873624    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 14:42:22.909542    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 14:42:22.939448    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 14:42:22.970613    6676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 14:42:23.001271    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 14:42:23.033719    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 14:42:23.067009    6676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
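
The sed sequence above is how the "cgroupfs" cgroup driver is selected for containerd: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are rewritten to io.containerd.runc.v2, and the CRI conf_dir is pointed at /etc/cni/net.d. A quick in-guest check that the edits landed (a sketch; the keys and path are the ones named in the log):

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|runc\.v2' /etc/containerd/config.toml
    # expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.9",
    #         conf_dir = "/etc/cni/net.d", and only io.containerd.runc.v2 runtimes
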
	I0612 14:42:23.106473    6676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 14:42:23.126791    6676 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 14:42:23.139694    6676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 14:42:23.168208    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:42:23.358463    6676 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0612 14:42:23.390100    6676 start.go:494] detecting cgroup driver to use...
	I0612 14:42:23.402496    6676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 14:42:23.424120    6676 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0612 14:42:23.424161    6676 command_runner.go:130] > [Unit]
	I0612 14:42:23.424230    6676 command_runner.go:130] > Description=Docker Application Container Engine
	I0612 14:42:23.424230    6676 command_runner.go:130] > Documentation=https://docs.docker.com
	I0612 14:42:23.424364    6676 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0612 14:42:23.424364    6676 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0612 14:42:23.424364    6676 command_runner.go:130] > StartLimitBurst=3
	I0612 14:42:23.424427    6676 command_runner.go:130] > StartLimitIntervalSec=60
	I0612 14:42:23.424427    6676 command_runner.go:130] > [Service]
	I0612 14:42:23.424427    6676 command_runner.go:130] > Type=notify
	I0612 14:42:23.424467    6676 command_runner.go:130] > Restart=on-failure
	I0612 14:42:23.424467    6676 command_runner.go:130] > Environment=NO_PROXY=172.23.198.154
	I0612 14:42:23.424517    6676 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0612 14:42:23.424517    6676 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0612 14:42:23.424562    6676 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0612 14:42:23.424562    6676 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0612 14:42:23.424718    6676 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0612 14:42:23.424756    6676 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0612 14:42:23.424756    6676 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0612 14:42:23.424825    6676 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0612 14:42:23.424825    6676 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0612 14:42:23.424867    6676 command_runner.go:130] > ExecStart=
	I0612 14:42:23.424867    6676 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0612 14:42:23.424906    6676 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0612 14:42:23.424947    6676 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0612 14:42:23.424947    6676 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0612 14:42:23.424993    6676 command_runner.go:130] > LimitNOFILE=infinity
	I0612 14:42:23.424993    6676 command_runner.go:130] > LimitNPROC=infinity
	I0612 14:42:23.424993    6676 command_runner.go:130] > LimitCORE=infinity
	I0612 14:42:23.424993    6676 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0612 14:42:23.425073    6676 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0612 14:42:23.425073    6676 command_runner.go:130] > TasksMax=infinity
	I0612 14:42:23.425143    6676 command_runner.go:130] > TimeoutStartSec=0
	I0612 14:42:23.425143    6676 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0612 14:42:23.425143    6676 command_runner.go:130] > Delegate=yes
	I0612 14:42:23.425206    6676 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0612 14:42:23.425240    6676 command_runner.go:130] > KillMode=process
	I0612 14:42:23.425240    6676 command_runner.go:130] > [Install]
	I0612 14:42:23.425240    6676 command_runner.go:130] > WantedBy=multi-user.target
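
The unit's comments describe the standard systemd drop-in pattern: an empty ExecStart= first clears the command inherited from the base unit, and the ExecStart= that follows replaces it, since non-oneshot services may carry only one. A minimal drop-in illustrating the same pattern (a sketch; the override path and the pared-down dockerd flags here are illustrative, not minikube's):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart docker
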
	I0612 14:42:23.437805    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 14:42:23.471293    6676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 14:42:23.516456    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 14:42:23.558838    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 14:42:23.597309    6676 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 14:42:23.669436    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 14:42:23.691037    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 14:42:23.723738    6676 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0612 14:42:23.735780    6676 ssh_runner.go:195] Run: which cri-dockerd
	I0612 14:42:23.740068    6676 command_runner.go:130] > /usr/bin/cri-dockerd
	I0612 14:42:23.755300    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 14:42:23.773336    6676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 14:42:23.817868    6676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 14:42:24.020599    6676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 14:42:24.217214    6676 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 14:42:24.217388    6676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 14:42:24.261617    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:42:24.453486    6676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 14:42:26.962045    6676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5063099s)
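
docker.go:574 reports the 130-byte /etc/docker/daemon.json it copies in, but not its contents. A representative daemon.json that selects the cgroupfs driver, plus a verification step, would be (a sketch; the actual payload minikube writes may carry more keys):

    printf '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }\n' \
      | sudo tee /etc/docker/daemon.json >/dev/null
    sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs
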
	I0612 14:42:26.974485    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 14:42:27.013469    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 14:42:27.059432    6676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 14:42:27.242929    6676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 14:42:27.447708    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:42:27.646505    6676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 14:42:27.688716    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 14:42:27.727260    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:42:27.910841    6676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 14:42:28.014482    6676 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 14:42:28.027408    6676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 14:42:28.039167    6676 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0612 14:42:28.039167    6676 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 14:42:28.039281    6676 command_runner.go:130] > Device: 0,22	Inode: 879         Links: 1
	I0612 14:42:28.039281    6676 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0612 14:42:28.039281    6676 command_runner.go:130] > Access: 2024-06-12 21:42:27.942138634 +0000
	I0612 14:42:28.039281    6676 command_runner.go:130] > Modify: 2024-06-12 21:42:27.942138634 +0000
	I0612 14:42:28.039281    6676 command_runner.go:130] > Change: 2024-06-12 21:42:27.946138632 +0000
	I0612 14:42:28.039281    6676 command_runner.go:130] >  Birth: -
	I0612 14:42:28.040170    6676 start.go:562] Will wait 60s for crictl version
	I0612 14:42:28.053992    6676 ssh_runner.go:195] Run: which crictl
	I0612 14:42:28.056943    6676 command_runner.go:130] > /usr/bin/crictl
	I0612 14:42:28.075817    6676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 14:42:28.132850    6676 command_runner.go:130] > Version:  0.1.0
	I0612 14:42:28.132850    6676 command_runner.go:130] > RuntimeName:  docker
	I0612 14:42:28.132850    6676 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0612 14:42:28.132850    6676 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 14:42:28.132850    6676 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 14:42:28.142397    6676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 14:42:28.171989    6676 command_runner.go:130] > 26.1.4
	I0612 14:42:28.183163    6676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 14:42:28.213678    6676 command_runner.go:130] > 26.1.4
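
With /etc/crictl.yaml now pointing at unix:///var/run/cri-dockerd.sock, the two version probes above can be reproduced directly in the guest (a sketch using the endpoint and binaries the log already located):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    docker version --format '{{.Server.Version}}'   # 26.1.4 in this run
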
	I0612 14:42:28.217574    6676 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 14:42:28.219873    6676 out.go:177]   - env NO_PROXY=172.23.198.154
	I0612 14:42:28.222177    6676 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 14:42:28.225755    6676 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 14:42:28.225755    6676 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 14:42:28.225755    6676 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 14:42:28.225755    6676 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 14:42:28.227484    6676 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 14:42:28.227484    6676 ip.go:210] interface addr: 172.23.192.1/20
	I0612 14:42:28.241226    6676 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 14:42:28.243972    6676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 14:42:28.269526    6676 mustload.go:65] Loading cluster: multinode-025000
	I0612 14:42:28.269987    6676 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:42:28.271009    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:42:30.323818    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:42:30.323818    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:30.339580    6676 host.go:66] Checking if "multinode-025000" exists ...
	I0612 14:42:30.340570    6676 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000 for IP: 172.23.196.105
	I0612 14:42:30.340570    6676 certs.go:194] generating shared ca certs ...
	I0612 14:42:30.340570    6676 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 14:42:30.341177    6676 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 14:42:30.341673    6676 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 14:42:30.341820    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 14:42:30.342069    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 14:42:30.342264    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 14:42:30.342471    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 14:42:30.342945    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 14:42:30.343299    6676 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 14:42:30.343486    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 14:42:30.343737    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 14:42:30.344172    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 14:42:30.344434    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 14:42:30.344606    6676 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 14:42:30.345081    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 14:42:30.345297    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 14:42:30.345484    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:42:30.345788    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 14:42:30.391138    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 14:42:30.435791    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 14:42:30.480263    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 14:42:30.523959    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 14:42:30.566485    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 14:42:30.624959    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 14:42:30.690513    6676 ssh_runner.go:195] Run: openssl version
	I0612 14:42:30.701330    6676 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 14:42:30.711264    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 14:42:30.742571    6676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 14:42:30.750322    6676 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 14:42:30.750439    6676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 14:42:30.761113    6676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 14:42:30.770655    6676 command_runner.go:130] > 51391683
	I0612 14:42:30.781953    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 14:42:30.824592    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 14:42:30.855661    6676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 14:42:30.862594    6676 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 14:42:30.862671    6676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 14:42:30.873166    6676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 14:42:30.876501    6676 command_runner.go:130] > 3ec20f2e
	I0612 14:42:30.892446    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 14:42:30.922090    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 14:42:30.952840    6676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:42:30.960636    6676 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:42:30.960726    6676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:42:30.971751    6676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 14:42:30.984605    6676 command_runner.go:130] > b5213941
	I0612 14:42:30.995968    6676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
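
The openssl x509 -hash calls above compute the subject-hash names (51391683, 3ec20f2e, b5213941) under which OpenSSL looks up CA certificates, and each ln -fs publishes one PEM as /etc/ssl/certs/<hash>.0. The same step for any one certificate (a sketch; $pem stands for any of the PEMs installed above):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")    # b5213941 in this run
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
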
	I0612 14:42:31.024447    6676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 14:42:31.027751    6676 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 14:42:31.033344    6676 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 14:42:31.033524    6676 kubeadm.go:928] updating node {m02 172.23.196.105 8443 v1.30.1 docker false true} ...
	I0612 14:42:31.033656    6676 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.196.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 14:42:31.044630    6676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 14:42:31.062651    6676 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0612 14:42:31.064930    6676 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0612 14:42:31.076856    6676 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0612 14:42:31.096760    6676 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0612 14:42:31.096760    6676 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0612 14:42:31.096760    6676 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0612 14:42:31.096760    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 14:42:31.096760    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 14:42:31.112457    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 14:42:31.114067    6676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0612 14:42:31.114571    6676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0612 14:42:31.133708    6676 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 14:42:31.138479    6676 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 14:42:31.138479    6676 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0612 14:42:31.138562    6676 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 14:42:31.138665    6676 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0612 14:42:31.138665    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0612 14:42:31.138829    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0612 14:42:31.153470    6676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0612 14:42:31.260933    6676 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 14:42:31.261017    6676 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0612 14:42:31.261196    6676 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
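
Per the binary.go lines above, minikube resolves kubectl, kubeadm and kubelet against dl.k8s.io with a per-binary .sha256 checksum, then copies them from the host cache into /var/lib/minikube/binaries/v1.30.1 over scp. An equivalent in-guest fetch-verify-stage loop would be (a sketch; the URLs are the ones in the log, the loop itself is illustrative):

    v=v1.30.1; dst=/var/lib/minikube/binaries/$v
    sudo mkdir -p "$dst"
    for b in kubectl kubeadm kubelet; do
      curl -fsSLO "https://dl.k8s.io/release/$v/bin/linux/amd64/$b"
      echo "$(curl -fsSL https://dl.k8s.io/release/$v/bin/linux/amd64/$b.sha256)  $b" | sha256sum -c -
      sudo install -m 0755 "$b" "$dst/$b"
    done
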
	I0612 14:42:32.525891    6676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0612 14:42:32.544679    6676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0612 14:42:32.575910    6676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
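
After the two unit files land, a quick sanity check that systemd sees both the service and its kubeadm drop-in (paths are the ones named in the scp lines above):

    systemctl cat kubelet    # should show /lib/systemd/system/kubelet.service plus the drop-in
    ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
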
	I0612 14:42:32.617522    6676 ssh_runner.go:195] Run: grep 172.23.198.154	control-plane.minikube.internal$ /etc/hosts
	I0612 14:42:32.625647    6676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.198.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
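
This is the same /etc/hosts pinning used earlier for host.minikube.internal: strip any stale entry for the name, append the fresh IP, and copy the result back over /etc/hosts via a temp file. Generalized as a helper (a sketch; pin_host is a hypothetical name, not minikube's):

    pin_host() {  # usage: pin_host <ip> <name>
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    pin_host 172.23.198.154 control-plane.minikube.internal
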
	I0612 14:42:32.658489    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:42:32.866765    6676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 14:42:32.896213    6676 host.go:66] Checking if "multinode-025000" exists ...
	I0612 14:42:32.896993    6676 start.go:316] joinCluster: &{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 14:42:32.896993    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 14:42:32.896993    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:42:35.038045    6676 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:42:35.038045    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:35.049484    6676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:42:37.533934    6676 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:42:37.545718    6676 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:42:37.546049    6676 sshutil.go:53] new ssh client: &{IP:172.23.198.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 14:42:37.729469    6676 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token q3yklt.xcwq8kgjjry2hv3c --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 
	I0612 14:42:37.729587    6676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8325773s)
	I0612 14:42:37.729587    6676 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 14:42:37.729587    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q3yklt.xcwq8kgjjry2hv3c --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-025000-m02"
	I0612 14:42:37.925108    6676 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 14:42:39.269262    6676 command_runner.go:130] > [preflight] Running pre-flight checks
	I0612 14:42:39.269317    6676 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0612 14:42:39.269355    6676 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0612 14:42:39.269355    6676 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 14:42:39.269355    6676 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 14:42:39.269355    6676 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0612 14:42:39.269355    6676 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 14:42:39.269355    6676 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002253141s
	I0612 14:42:39.269355    6676 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0612 14:42:39.269355    6676 command_runner.go:130] > This node has joined the cluster:
	I0612 14:42:39.269355    6676 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0612 14:42:39.269355    6676 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0612 14:42:39.269355    6676 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0612 14:42:39.269355    6676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q3yklt.xcwq8kgjjry2hv3c --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-025000-m02": (1.539763s)
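
The join itself is the one-liner printed by kubeadm token create above: the token authenticates the node to the control plane, and --discovery-token-ca-cert-hash pins the cluster CA so the bootstrap cannot be intercepted. Per the kubeadm output, the result can be confirmed from the control plane (a sketch):

    kubectl get nodes -o wide
    kubectl get node multinode-025000-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # False until the kubelet settles
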
	I0612 14:42:39.269355    6676 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 14:42:39.469879    6676 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0612 14:42:39.671725    6676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-025000-m02 minikube.k8s.io/updated_at=2024_06_12T14_42_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=multinode-025000 minikube.k8s.io/primary=false
	I0612 14:42:39.796746    6676 command_runner.go:130] > node/multinode-025000-m02 labeled
	I0612 14:42:39.796891    6676 start.go:318] duration metric: took 6.899875s to joinCluster
	I0612 14:42:39.796958    6676 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 14:42:39.800129    6676 out.go:177] * Verifying Kubernetes components...
	I0612 14:42:39.797780    6676 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:42:39.812584    6676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 14:42:40.007092    6676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 14:42:40.035745    6676 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 14:42:40.036511    6676 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.198.154:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 14:42:40.037349    6676 node_ready.go:35] waiting up to 6m0s for node "multinode-025000-m02" to be "Ready" ...
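
The GETs that follow are node_ready.go polling /api/v1/nodes/multinode-025000-m02 roughly every 500ms and reading the Ready condition out of each response. The same wait, expressed with kubectl instead of raw API calls (a sketch):

    kubectl wait --for=condition=Ready node/multinode-025000-m02 --timeout=6m
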
	I0612 14:42:40.038020    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:40.038020    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:40.038020    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:40.038020    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:40.048378    6676 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 14:42:40.048378    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:40.048378    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:40.048378    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:40.048378    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:40.048378    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:40.048378    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:40.052207    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:40 GMT
	I0612 14:42:40.052207    6676 round_trippers.go:580]     Audit-Id: d2f4cef6-50ec-441d-8847-4894d04e7a3f
	I0612 14:42:40.052251    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:40.546019    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:40.546019    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:40.546019    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:40.546019    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:40.550939    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:40.550939    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:40.550939    6676 round_trippers.go:580]     Audit-Id: fc8aafc0-e668-4100-8301-43c51d707719
	I0612 14:42:40.550939    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:40.550939    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:40.550939    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:40.550939    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:40.550939    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:40.550939    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:40 GMT
	I0612 14:42:40.550939    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:41.040473    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:41.040713    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:41.040713    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:41.040713    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:41.041098    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:41.044989    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:41.044989    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:41.045074    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:41.045074    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:41.045101    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:41.045101    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:41 GMT
	I0612 14:42:41.045101    6676 round_trippers.go:580]     Audit-Id: 8a0f2835-8c4e-4844-be14-b44d60551602
	I0612 14:42:41.045101    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:41.045263    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:41.549422    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:41.549422    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:41.549691    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:41.549691    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:41.549821    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:41.549821    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:41.553455    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:41.553455    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:41 GMT
	I0612 14:42:41.553455    6676 round_trippers.go:580]     Audit-Id: 923a68c0-9776-4ca3-af30-ca7f311d3132
	I0612 14:42:41.553455    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:41.553455    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:41.553455    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:41.553455    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:41.553585    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:42.052284    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:42.052284    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:42.052284    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:42.052284    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:42.056800    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:42.056800    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:42.056800    6676 round_trippers.go:580]     Audit-Id: e95ff560-102e-4ae6-91a8-3823da040acd
	I0612 14:42:42.056879    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:42.056879    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:42.056879    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:42.056879    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:42.056879    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:42.056879    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:42 GMT
	I0612 14:42:42.057010    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:42.057486    6676 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 14:42:42.550689    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:42.550740    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:42.550740    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:42.550740    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:42.551305    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:42.551305    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:42.551305    6676 round_trippers.go:580]     Audit-Id: 871d3aee-09e9-44c6-a3a9-125644c8d3e0
	I0612 14:42:42.551305    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:42.551305    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:42.555410    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:42.555410    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:42.555410    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:42.555410    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:42 GMT
	I0612 14:42:42.555661    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:43.044033    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:43.044117    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:43.044117    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:43.044117    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:43.044507    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:43.044507    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:43.044507    6676 round_trippers.go:580]     Audit-Id: 2d9e9dbf-c913-4aeb-bf0f-ee09214c6f67
	I0612 14:42:43.044507    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:43.044507    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:43.044507    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:43.044507    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:43.044507    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:43.044507    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:43 GMT
	I0612 14:42:43.044507    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:43.544541    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:43.544618    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:43.544618    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:43.544618    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:43.545163    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:43.548611    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:43.548611    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:43.548611    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:43.548611    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:43.548611    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:43 GMT
	I0612 14:42:43.548611    6676 round_trippers.go:580]     Audit-Id: f893aef4-180e-4e9a-a1ed-497e28e1cca0
	I0612 14:42:43.548611    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:43.548611    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:43.548805    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:44.050635    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:44.050859    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:44.050859    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:44.050859    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:44.051453    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:44.051453    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:44.051453    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:44.051453    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:44 GMT
	I0612 14:42:44.051453    6676 round_trippers.go:580]     Audit-Id: 6f2f2dfc-08e0-4e5f-ba41-7490cb5c8d68
	I0612 14:42:44.051453    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:44.051453    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:44.051453    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:44.051453    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:44.055309    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:44.557152    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:44.557152    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:44.557278    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:44.557317    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:44.561080    6676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 14:42:44.561165    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:44.561165    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:44.561165    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:44.561165    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:44.561165    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:44.561165    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:44 GMT
	I0612 14:42:44.561236    6676 round_trippers.go:580]     Audit-Id: d12c905f-8b76-4804-95b4-d9db74e24e9a
	I0612 14:42:44.561236    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:44.561427    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:44.561925    6676 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 14:42:45.041548    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:45.041548    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:45.041548    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:45.041650    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:45.046455    6676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 14:42:45.046566    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:45.046566    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:45.046566    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:45.046621    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:45.046621    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:45 GMT
	I0612 14:42:45.046621    6676 round_trippers.go:580]     Audit-Id: bde8350a-0389-4c6f-8842-a5514dee201c
	I0612 14:42:45.046651    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:45.046651    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:45.047450    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:45.544998    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:45.544998    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:45.544998    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:45.544998    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:45.550464    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:45.550464    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:45.550464    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:45.550464    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:45.550464    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:45.550464    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:45.550464    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:45 GMT
	I0612 14:42:45.550464    6676 round_trippers.go:580]     Audit-Id: c1235618-b104-4b18-9cea-70ebfcd5bac7
	I0612 14:42:45.550600    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:45.550694    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:46.043461    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:46.043461    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:46.043461    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:46.043461    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:46.046719    6676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 14:42:46.048150    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:46.048150    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:46.048150    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:46 GMT
	I0612 14:42:46.048150    6676 round_trippers.go:580]     Audit-Id: e48a64d7-b105-412a-9a8a-58d5abe5bc30
	I0612 14:42:46.048150    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:46.048150    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:46.048150    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:46.048150    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:46.048404    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:46.540608    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:46.540608    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:46.540608    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:46.540608    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:46.541246    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:46.541246    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:46.541246    6676 round_trippers.go:580]     Audit-Id: 09ca03d7-5dc4-4432-a2d2-4f414b39b419
	I0612 14:42:46.545040    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:46.545040    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:46.545040    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:46.545040    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:46.545040    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:46.545040    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:46 GMT
	I0612 14:42:46.545094    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:47.041008    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:47.041008    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:47.041106    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:47.041106    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:47.044099    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:42:47.044099    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:47.044099    6676 round_trippers.go:580]     Audit-Id: 60d5caed-8641-4742-8f67-037d2fe55ba4
	I0612 14:42:47.044099    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:47.044099    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:47.044099    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:47.044099    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:47.044099    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:47.044099    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:47 GMT
	I0612 14:42:47.045320    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:47.045765    6676 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 14:42:47.538381    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:47.538821    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:47.538821    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:47.538821    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:47.543270    6676 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 14:42:47.543270    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:47.543270    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:47.543270    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:47 GMT
	I0612 14:42:47.543270    6676 round_trippers.go:580]     Audit-Id: 87f9f65c-7dc4-4955-b2cf-2ac69c5a748e
	I0612 14:42:47.543270    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:47.543270    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:47.543270    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:47.543270    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:47.543270    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:48.053305    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:48.053305    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:48.053305    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:48.053305    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:48.054007    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:48.054007    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:48.054007    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:48.057821    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:48 GMT
	I0612 14:42:48.057821    6676 round_trippers.go:580]     Audit-Id: fd17b69b-6699-4f6a-b189-06c7e4abe6c3
	I0612 14:42:48.057821    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:48.057821    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:48.057821    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:48.057821    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:48.058050    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:48.550260    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:48.550482    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:48.550482    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:48.550482    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:48.552398    6676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 14:42:48.552398    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:48.552398    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:48.552398    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:48.552398    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:48.552398    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:48.554550    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:48.554550    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:48 GMT
	I0612 14:42:48.554550    6676 round_trippers.go:580]     Audit-Id: 83022ce7-6b93-420c-9155-13ed732f4dd9
	I0612 14:42:48.554715    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:49.045432    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:49.045617    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:49.045617    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:49.045617    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:49.049517    6676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 14:42:49.049517    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:49.049517    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:49.049517    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:49.049517    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:49.049517    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:49 GMT
	I0612 14:42:49.049517    6676 round_trippers.go:580]     Audit-Id: e0d1ef31-eb49-48ba-b644-b97a05995278
	I0612 14:42:49.049517    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:49.049517    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:49.052640    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:49.052640    6676 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 14:42:49.553367    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:49.553367    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:49.553367    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:49.553367    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:49.727109    6676 round_trippers.go:574] Response Status: 200 OK in 173 milliseconds
	I0612 14:42:49.734027    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:49.734027    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:49 GMT
	I0612 14:42:49.734027    6676 round_trippers.go:580]     Audit-Id: f9217afd-147d-4b67-8bf4-d68868eee2ca
	I0612 14:42:49.734027    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:49.734027    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:49.734027    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:49.734027    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:49.734027    6676 round_trippers.go:580]     Content-Length: 4030
	I0612 14:42:49.734839    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"616","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3006 chars]
	I0612 14:42:50.038720    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:50.038720    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:50.038720    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:50.038720    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:50.070593    6676 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0612 14:42:50.070593    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:50.070593    6676 round_trippers.go:580]     Audit-Id: 769a9278-5954-482f-bedd-d6cb54f46398
	I0612 14:42:50.070593    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:50.070593    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:50.076198    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:50.076198    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:50.076198    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:50 GMT
	I0612 14:42:50.076428    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:50.555595    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:50.555657    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:50.555657    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:50.555657    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:50.563589    6676 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 14:42:50.563589    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:50.563589    6676 round_trippers.go:580]     Audit-Id: 22cc8c45-c514-4ccb-9355-7384e4457ab6
	I0612 14:42:50.563589    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:50.563589    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:50.563589    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:50.563589    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:50.563589    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:50 GMT
	I0612 14:42:50.564312    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:51.053144    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:51.053144    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:51.053144    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:51.053286    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:51.062185    6676 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 14:42:51.062185    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:51.062185    6676 round_trippers.go:580]     Audit-Id: 80216dcf-1432-4710-9b90-f811d675f60f
	I0612 14:42:51.062185    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:51.062185    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:51.062185    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:51.062185    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:51.062185    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:51 GMT
	I0612 14:42:51.062944    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:51.062944    6676 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
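Note that from 14:42:50 onward the returned object's resourceVersion has moved from 616 to 631 and the truncated body has grown from 3006 to 3398 chars, so the kubelet has posted a status update, but the NodeReady condition is still False and the poll continues. The same check can be reproduced by hand against this cluster; the context name below assumes minikube's default of naming the kubeconfig context after the profile:

kubectl --context multinode-025000 get node multinode-025000-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
kubectl --context multinode-025000 wait node/multinode-025000-m02 --for=condition=Ready --timeout=6m
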
	I0612 14:42:51.547619    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:51.547619    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:51.547619    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:51.547619    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:51.548222    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:51.551337    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:51.551420    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:51 GMT
	I0612 14:42:51.551420    6676 round_trippers.go:580]     Audit-Id: 35de805f-2514-4219-98c2-245207f96808
	I0612 14:42:51.551420    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:51.551420    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:51.551420    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:51.551420    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:51.551420    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:52.055933    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:52.055933    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:52.055997    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:52.055997    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:52.059027    6676 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 14:42:52.059027    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:52.059027    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:52.059027    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:52 GMT
	I0612 14:42:52.059027    6676 round_trippers.go:580]     Audit-Id: c9f19bf6-52ee-43c9-a0de-2e2e4c2ee80e
	I0612 14:42:52.059027    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:52.059027    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:52.059027    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:52.059778    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:52.546493    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:52.546695    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:52.546695    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:52.546695    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:52.547038    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:52.550856    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:52.550856    6676 round_trippers.go:580]     Audit-Id: b77a3fab-dc91-4097-b17d-f7bf8858e1ca
	I0612 14:42:52.550950    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:52.550950    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:52.551012    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:52.551012    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:52.551012    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:52 GMT
	I0612 14:42:52.551167    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:53.056826    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:53.056826    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:53.056826    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:53.056826    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:53.059529    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:42:53.060437    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:53.060437    6676 round_trippers.go:580]     Audit-Id: dcfeaac4-5157-4c72-9d5b-af2f789f9542
	I0612 14:42:53.060437    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:53.060437    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:53.060437    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:53.060484    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:53.060484    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:53 GMT
	I0612 14:42:53.060657    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:53.552738    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:53.552738    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:53.552738    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:53.552738    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:53.553365    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:53.553365    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:53.553365    6676 round_trippers.go:580]     Audit-Id: ce11430f-179e-4081-b755-df39d60963b0
	I0612 14:42:53.553365    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:53.553365    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:53.553365    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:53.553365    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:53.553365    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:53 GMT
	I0612 14:42:53.553365    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:53.553365    6676 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 14:42:54.039566    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:54.039621    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:54.039621    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:54.039681    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:54.042045    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:42:54.042045    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:54.042045    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:54.042045    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:54.043706    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:54.043706    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:54 GMT
	I0612 14:42:54.043706    6676 round_trippers.go:580]     Audit-Id: b37a6ab2-fa61-4968-a0e1-0bed1547d115
	I0612 14:42:54.043706    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:54.043814    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:54.548717    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:54.548851    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:54.548851    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:54.548851    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:54.552948    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:54.552948    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:54.552948    6676 round_trippers.go:580]     Audit-Id: 5d8e6a7a-da69-4a46-b827-897c38a6b13b
	I0612 14:42:54.552948    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:54.552948    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:54.552948    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:54.552948    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:54.552948    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:54 GMT
	I0612 14:42:54.553211    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:55.048442    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:55.048627    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:55.048627    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:55.048627    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:55.049319    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:55.052603    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:55.052603    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:55.052603    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:55.052603    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:55.052766    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:55 GMT
	I0612 14:42:55.052766    6676 round_trippers.go:580]     Audit-Id: 0b6ce809-6fab-4df2-8d28-87bff98b60d1
	I0612 14:42:55.052766    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:55.052892    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:55.539430    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:55.539430    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:55.539554    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:55.539554    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:55.546318    6676 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 14:42:55.546513    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:55.546558    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:55.546558    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:55.546558    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:55.546558    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:55 GMT
	I0612 14:42:55.546608    6676 round_trippers.go:580]     Audit-Id: 01c3de3a-2222-446a-88e5-03ce62e3003b
	I0612 14:42:55.546642    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:55.546642    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:56.040267    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:56.040460    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:56.040460    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:56.040460    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:56.041135    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:56.041135    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:56.044317    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:56.044317    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:56.044317    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:56 GMT
	I0612 14:42:56.044317    6676 round_trippers.go:580]     Audit-Id: 31a684ff-4db2-4819-99b1-df479f2a285c
	I0612 14:42:56.044317    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:56.044317    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:56.044973    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:56.045502    6676 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 14:42:56.538647    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:56.538733    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:56.538733    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:56.538733    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:56.539156    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:56.542669    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:56.542669    6676 round_trippers.go:580]     Audit-Id: 6d7ffbe9-3bee-488b-95de-59573276c0ae
	I0612 14:42:56.542669    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:56.542669    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:56.542669    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:56.542669    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:56.542740    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:56 GMT
	I0612 14:42:56.543022    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:57.040740    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:57.040740    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:57.040740    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:57.040740    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:57.041278    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:57.046093    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:57.046093    6676 round_trippers.go:580]     Audit-Id: 41f83088-5aed-40fd-8750-9795d6c4f789
	I0612 14:42:57.046093    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:57.046093    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:57.046093    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:57.046093    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:57.046093    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:57 GMT
	I0612 14:42:57.046434    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:57.547387    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:57.547387    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:57.547387    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:57.547387    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:57.548045    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:57.548045    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:57.548045    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:57 GMT
	I0612 14:42:57.548045    6676 round_trippers.go:580]     Audit-Id: db8c4abc-f918-419b-9892-ef04a76dee76
	I0612 14:42:57.552107    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:57.552107    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:57.552107    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:57.552107    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:57.552505    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:58.042775    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:58.042775    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.042861    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.042861    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.043704    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:58.047855    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.047855    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.047855    6676 round_trippers.go:580]     Audit-Id: 91aee2e9-d1ad-41a5-9d64-d14be116c750
	I0612 14:42:58.047855    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.047855    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.047855    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.047855    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.048415    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"631","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3398 chars]
	I0612 14:42:58.048907    6676 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 14:42:58.549488    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:58.549726    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.549726    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.549726    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.550046    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:58.553171    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.553171    6676 round_trippers.go:580]     Audit-Id: 172bbdba-2918-403c-84a8-222686abbe50
	I0612 14:42:58.553171    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.553171    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.553171    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.553171    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.553171    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.553397    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"652","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0612 14:42:58.553931    6676 node_ready.go:49] node "multinode-025000-m02" has status "Ready":"True"
	I0612 14:42:58.553931    6676 node_ready.go:38] duration metric: took 18.5165207s for node "multinode-025000-m02" to be "Ready" ...
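The block above is a plain condition-polling loop: the client re-GETs /api/v1/nodes/multinode-025000-m02 roughly every 500ms until node_ready.go sees the Ready condition flip to "True" (here after 18.5s, once the node object advanced to resourceVersion 652). A minimal sketch of that pattern with client-go, assuming a configured *kubernetes.Clientset; the helper name and error handling are illustrative, not minikube's actual code:

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-checks the node's Ready condition on a fixed interval,
    // mirroring the ~500ms poll visible in the log above. Sketch only.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat transient API errors as "not ready yet"
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }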
	I0612 14:42:58.553931    6676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 14:42:58.554076    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods
	I0612 14:42:58.554155    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.554155    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.554155    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.554339    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:58.554339    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.558810    6676 round_trippers.go:580]     Audit-Id: 3a0c1e98-bf94-43a3-aac6-d260fd712bb3
	I0612 14:42:58.558810    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.558810    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.558810    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.558810    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.558875    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.560608    6676 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"652"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"456","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70486 chars]
	I0612 14:42:58.564204    6676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.564204    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 14:42:58.564204    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.564204    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.564204    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.565877    6676 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 14:42:58.565877    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.565877    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.565877    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.565877    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.568164    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.568164    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.568164    6676 round_trippers.go:580]     Audit-Id: 20ee67b4-5634-46a2-ba4f-3fbacbe76841
	I0612 14:42:58.568385    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"456","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0612 14:42:58.568682    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:42:58.568682    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.568682    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.568682    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.570937    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:42:58.570937    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.571951    6676 round_trippers.go:580]     Audit-Id: e7f558e7-c9ce-4925-af70-72ba2b20d92d
	I0612 14:42:58.571951    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.571951    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.572012    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.572012    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.572012    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.572012    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"464","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0612 14:42:58.572714    6676 pod_ready.go:92] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"True"
	I0612 14:42:58.572714    6676 pod_ready.go:81] duration metric: took 8.5102ms for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.572714    6676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.572714    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025000
	I0612 14:42:58.572714    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.572714    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.572714    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.573431    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:58.573431    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.573431    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.573431    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.573431    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.573431    6676 round_trippers.go:580]     Audit-Id: c3852ddd-f760-438a-9a65-a8e270be2725
	I0612 14:42:58.573431    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.573431    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.576182    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"630bafc4-4576-4974-b638-7ab52dcfec18","resourceVersion":"416","creationTimestamp":"2024-06-12T21:39:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.198.154:2379","kubernetes.io/config.hash":"04dcbc8e258f964f689941b6844769d9","kubernetes.io/config.mirror":"04dcbc8e258f964f689941b6844769d9","kubernetes.io/config.seen":"2024-06-12T21:39:23.999683415Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0612 14:42:58.576773    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:42:58.576856    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.576856    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.576856    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.577053    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:58.579541    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.579541    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.579541    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.579541    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.579541    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.579541    6676 round_trippers.go:580]     Audit-Id: 57643148-10d4-43ee-a1a2-62ce8246cbc9
	I0612 14:42:58.579541    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.579867    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"464","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0612 14:42:58.580380    6676 pod_ready.go:92] pod "etcd-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 14:42:58.580464    6676 pod_ready.go:81] duration metric: took 7.7495ms for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.580464    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.580567    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025000
	I0612 14:42:58.580659    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.580659    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.580659    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.581628    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:58.581628    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.581628    6676 round_trippers.go:580]     Audit-Id: e65caa60-2829-4f08-afc5-1f94397f96c6
	I0612 14:42:58.581628    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.581628    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.581628    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.581628    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.583659    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.583914    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025000","namespace":"kube-system","uid":"6b429685-b322-4b00-83fc-743786ff40e1","resourceVersion":"418","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.198.154:8443","kubernetes.io/config.hash":"610414aa8160848c0b6b79ea0a700b83","kubernetes.io/config.mirror":"610414aa8160848c0b6b79ea0a700b83","kubernetes.io/config.seen":"2024-06-12T21:39:31.214464964Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0612 14:42:58.584147    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:42:58.584147    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.584147    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.584147    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.584813    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:58.584813    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.584813    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.584813    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.584813    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.584813    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.584813    6676 round_trippers.go:580]     Audit-Id: 06da98b4-c69b-40e0-9e9f-528569df0bce
	I0612 14:42:58.587315    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.587697    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"464","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0612 14:42:58.587998    6676 pod_ready.go:92] pod "kube-apiserver-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 14:42:58.587998    6676 pod_ready.go:81] duration metric: took 7.5341ms for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.587998    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.587998    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025000
	I0612 14:42:58.587998    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.587998    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.587998    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.590922    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:42:58.590922    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.590922    6676 round_trippers.go:580]     Audit-Id: 777142e8-57ee-4b9d-afdc-af778cf6087d
	I0612 14:42:58.590922    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.590922    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.590922    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.590922    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.590922    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.591902    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025000","namespace":"kube-system","uid":"68c9aa4f-49ee-439c-ad51-7943e65c0085","resourceVersion":"417","creationTimestamp":"2024-06-12T21:39:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.mirror":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.seen":"2024-06-12T21:39:23.999674614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0612 14:42:58.592917    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:42:58.592949    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.592985    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.592985    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.595003    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:42:58.595003    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.595003    6676 round_trippers.go:580]     Audit-Id: 49fdd465-d7e2-4977-af7d-39c82f4e1053
	I0612 14:42:58.595003    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.595003    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.595003    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.595732    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.595732    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.595937    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"464","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0612 14:42:58.595937    6676 pod_ready.go:92] pod "kube-controller-manager-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 14:42:58.595937    6676 pod_ready.go:81] duration metric: took 7.9397ms for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.595937    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.761211    6676 request.go:629] Waited for 165.0475ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 14:42:58.761397    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 14:42:58.761397    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.761397    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.761397    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.769583    6676 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 14:42:58.769672    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.769672    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.769672    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.769672    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.769672    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.769672    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.769672    6676 round_trippers.go:580]     Audit-Id: a7bf6bfc-8127-483d-8948-0c5c46a96113
	I0612 14:42:58.769672    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47lr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"10b24fa7-8eea-4fbb-ab18-404e853aa7ab","resourceVersion":"411","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
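The "Waited for ... due to client-side throttling, not priority and fairness" lines here and below are emitted by client-go's local token-bucket flow control, not by the API server. When that local wait matters, it can be loosened while building the REST config; a sketch, assuming a kubeconfig path, with illustrative numbers:

    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClientWithHigherLimits raises client-go's client-side rate limits so
    // bursts of GETs (like the pod/node checks above) queue less locally.
    func newClientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // client-go's default steady rate is 5 req/s when unset
    	cfg.Burst = 100 // default burst is 10
    	return kubernetes.NewForConfig(cfg)
    }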
	I0612 14:42:58.953805    6676 request.go:629] Waited for 182.6437ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:42:58.954179    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:42:58.954179    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:58.954179    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:58.954179    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:58.954548    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:58.958258    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:58.958258    6676 round_trippers.go:580]     Audit-Id: ed9f6a51-5f56-4b11-8bc4-d156536c0e18
	I0612 14:42:58.958258    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:58.958258    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:58.958258    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:58.958258    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:58.958258    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:58 GMT
	I0612 14:42:58.958412    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"464","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0612 14:42:58.959010    6676 pod_ready.go:92] pod "kube-proxy-47lr8" in "kube-system" namespace has status "Ready":"True"
	I0612 14:42:58.959170    6676 pod_ready.go:81] duration metric: took 363.2314ms for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:58.959170    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:59.166111    6676 request.go:629] Waited for 206.1591ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 14:42:59.166111    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 14:42:59.166111    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:59.166111    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:59.166111    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:59.166862    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:59.171521    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:59.171521    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:59.171521    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:59.171521    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:59.171521    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:59 GMT
	I0612 14:42:59.171521    6676 round_trippers.go:580]     Audit-Id: c7c0f20c-dacd-48a8-8f3e-6e71b1241e75
	I0612 14:42:59.171521    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:59.171521    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdcdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623833c-ce55-46b1-a840-99b3143adac1","resourceVersion":"637","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0612 14:42:59.352651    6676 request.go:629] Waited for 180.1674ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:59.352838    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000-m02
	I0612 14:42:59.352838    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:59.352838    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:59.352838    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:59.353590    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:59.353590    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:59.353590    6676 round_trippers.go:580]     Audit-Id: ddf8a8d1-05aa-45b0-ad5b-fa0437e5b91b
	I0612 14:42:59.353590    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:59.353590    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:59.353590    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:59.353590    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:59.353590    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:59 GMT
	I0612 14:42:59.357470    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"652","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0612 14:42:59.358037    6676 pod_ready.go:92] pod "kube-proxy-tdcdp" in "kube-system" namespace has status "Ready":"True"
	I0612 14:42:59.358261    6676 pod_ready.go:81] duration metric: took 399.0902ms for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:59.358351    6676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:59.555301    6676 request.go:629] Waited for 196.9041ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 14:42:59.555363    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 14:42:59.555363    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:59.555363    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:59.555363    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:59.556076    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:59.560271    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:59.560271    6676 round_trippers.go:580]     Audit-Id: 03116753-cf19-43e5-8b4f-465de90388cb
	I0612 14:42:59.560271    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:59.560271    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:59.560271    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:59.560271    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:59.560271    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:59 GMT
	I0612 14:42:59.560995    6676 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025000","namespace":"kube-system","uid":"83b272cb-1286-47d8-bcb1-a66056dff2a5","resourceVersion":"415","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.mirror":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.seen":"2024-06-12T21:39:31.214466565Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0612 14:42:59.750278    6676 request.go:629] Waited for 188.3516ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:42:59.750545    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes/multinode-025000
	I0612 14:42:59.750674    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:59.750724    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:59.750724    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:59.751437    6676 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 14:42:59.754976    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:59.754976    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:59.754976    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:59.754976    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:59 GMT
	I0612 14:42:59.754976    6676 round_trippers.go:580]     Audit-Id: dcaf26fe-cf7b-469b-a4df-4ffdf58a5e9e
	I0612 14:42:59.754976    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:59.754976    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:59.755282    6676 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"464","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0612 14:42:59.755819    6676 pod_ready.go:92] pod "kube-scheduler-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 14:42:59.755874    6676 pod_ready.go:81] duration metric: took 397.5218ms for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 14:42:59.755874    6676 pod_ready.go:38] duration metric: took 1.2018584s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
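Each pod_ready.go wait above fetches the pod, then its node, and reports readiness from the pod's status conditions; the test itself reduces to scanning for the PodReady condition. A minimal sketch (the helper name is an assumption, not minikube's exact implementation):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // podReady reports whether the PodReady condition is True, i.e. the state
    // the log lines above print as has status "Ready":"True".
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }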
	I0612 14:42:59.755874    6676 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 14:42:59.768306    6676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 14:42:59.794982    6676 system_svc.go:56] duration metric: took 39.108ms WaitForService to wait for kubelet
	I0612 14:42:59.795075    6676 kubeadm.go:576] duration metric: took 19.9979348s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
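The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` through minikube's ssh_runner and takes a zero exit status to mean the unit is running. A local-exec analogue for illustration only (an assumption; the real step executes over SSH inside the guest):

    package sketch

    import "os/exec"

    // kubeletActive returns true when systemd reports the kubelet unit active;
    // `is-active --quiet` prints nothing and signals state via exit code alone.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }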
	I0612 14:42:59.795075    6676 node_conditions.go:102] verifying NodePressure condition ...
	I0612 14:42:59.951780    6676 request.go:629] Waited for 156.3346ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.198.154:8443/api/v1/nodes
	I0612 14:42:59.952056    6676 round_trippers.go:463] GET https://172.23.198.154:8443/api/v1/nodes
	I0612 14:42:59.952056    6676 round_trippers.go:469] Request Headers:
	I0612 14:42:59.952056    6676 round_trippers.go:473]     Accept: application/json, */*
	I0612 14:42:59.952137    6676 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 14:42:59.954629    6676 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 14:42:59.960044    6676 round_trippers.go:577] Response Headers:
	I0612 14:42:59.960044    6676 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 14:42:59.960044    6676 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 14:42:59.960044    6676 round_trippers.go:580]     Date: Wed, 12 Jun 2024 21:42:59 GMT
	I0612 14:42:59.960044    6676 round_trippers.go:580]     Audit-Id: 0bf1a606-79d7-4c8d-95b8-d5ed96930b39
	I0612 14:42:59.960044    6676 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 14:42:59.960044    6676 round_trippers.go:580]     Content-Type: application/json
	I0612 14:42:59.960044    6676 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"654"},"items":[{"metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"464","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9149 chars]
	I0612 14:42:59.961301    6676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 14:42:59.961301    6676 node_conditions.go:123] node cpu capacity is 2
	I0612 14:42:59.961301    6676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 14:42:59.961301    6676 node_conditions.go:123] node cpu capacity is 2
	I0612 14:42:59.961301    6676 node_conditions.go:105] duration metric: took 166.2258ms to run NodePressure ...
	I0612 14:42:59.961301    6676 start.go:240] waiting for startup goroutines ...
	I0612 14:42:59.961301    6676 start.go:254] writing updated cluster config ...
	I0612 14:42:59.974009    6676 ssh_runner.go:195] Run: rm -f paused
	I0612 14:43:00.119985    6676 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0612 14:43:00.128017    6676 out.go:177] * Done! kubectl is now configured to use "multinode-025000" cluster and "default" namespace by default
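	The start log ends with kubectl pointed at the new cluster. Assuming the same kubeconfig the test run used, that claim can be sanity-checked with two standard kubectl commands (the jsonpath query is the one from the kubectl cheat sheet):

	    kubectl config current-context
	    kubectl config view --minify -o jsonpath={..namespace}

	The first should print "multinode-025000" (minikube names the context after the profile) and the second "default".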
	
	
	==> Docker <==
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.029355873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.046172598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.046576999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.046711399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.047087499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:39:57 multinode-025000 cri-dockerd[1234]: time="2024-06-12T21:39:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b9e051df48486e732da2c72bf2d0e3ec93cf8774632ecedd8825e656ba04a93/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 21:39:57 multinode-025000 cri-dockerd[1234]: time="2024-06-12T21:39:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.474422444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.474502944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.474539244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.474694344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.609132273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.609295073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.609323972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:39:57 multinode-025000 dockerd[1335]: time="2024-06-12T21:39:57.609454572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:43:24 multinode-025000 dockerd[1335]: time="2024-06-12T21:43:24.701999401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 21:43:24 multinode-025000 dockerd[1335]: time="2024-06-12T21:43:24.703270183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 21:43:24 multinode-025000 dockerd[1335]: time="2024-06-12T21:43:24.703289283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:43:24 multinode-025000 dockerd[1335]: time="2024-06-12T21:43:24.704319668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:43:25 multinode-025000 cri-dockerd[1234]: time="2024-06-12T21:43:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 12 21:43:26 multinode-025000 cri-dockerd[1234]: time="2024-06-12T21:43:26Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 12 21:43:26 multinode-025000 dockerd[1335]: time="2024-06-12T21:43:26.510596318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 21:43:26 multinode-025000 dockerd[1335]: time="2024-06-12T21:43:26.511589324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 21:43:26 multinode-025000 dockerd[1335]: time="2024-06-12T21:43:26.511604624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 21:43:26 multinode-025000 dockerd[1335]: time="2024-06-12T21:43:26.512402829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bfc0382d49a48       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   47 seconds ago      Running             busybox                   0                   84a9b747663ca       busybox-fc5497c4f-45qqd
	e83cf4eef49e4       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   894c58e9fe752       coredns-7db6d8ff4d-vgcxw
	61910369e0d4b       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   5b9e051df4848       storage-provisioner
	4d60d82f6bc5d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              4 minutes ago       Running             kindnet-cni               0                   92f2d5f19e95e       kindnet-bqlg8
	c4842faba751e       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                0                   fad98f611536b       kube-proxy-47lr8
	6b021c195669e       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            0                   d9933fdc9ca72       kube-scheduler-multinode-025000
	2455f315465b9       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   40443305b24f5       etcd-multinode-025000
	685d167da53c9       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   0                   bb4351fab502e       kube-controller-manager-multinode-025000
	0749f44d03561       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            0                   2784305b1d5e9       kube-apiserver-multinode-025000
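	This table is the CRI-level snapshot that minikube logs collects from the primary node. A rough equivalent can be pulled by hand, assuming the profile is still up and the node image ships crictl (treat this as a sketch, not a guaranteed path):

	    out/minikube-windows-amd64.exe ssh -p multinode-025000 -- sudo crictl ps -a

	or, from the Kubernetes API instead of the runtime:

	    kubectl --context multinode-025000 get pods -A -o wide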
	
	
	==> coredns [e83cf4eef49e] <==
	[INFO] 10.244.0.3:49605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096301s
	[INFO] 10.244.1.2:37746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283001s
	[INFO] 10.244.1.2:54995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106501s
	[INFO] 10.244.1.2:49201 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077401s
	[INFO] 10.244.1.2:60577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077201s
	[INFO] 10.244.1.2:36057 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107301s
	[INFO] 10.244.1.2:43898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064s
	[INFO] 10.244.1.2:49177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091201s
	[INFO] 10.244.1.2:45207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000584s
	[INFO] 10.244.0.3:36676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151001s
	[INFO] 10.244.0.3:60305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305802s
	[INFO] 10.244.0.3:37468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209201s
	[INFO] 10.244.0.3:34743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125201s
	[INFO] 10.244.1.2:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240801s
	[INFO] 10.244.1.2:42306 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000309601s
	[INFO] 10.244.1.2:36509 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152901s
	[INFO] 10.244.1.2:55614 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000545s
	[INFO] 10.244.0.3:39195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130301s
	[INFO] 10.244.0.3:34618 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272902s
	[INFO] 10.244.0.3:44444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177201s
	[INFO] 10.244.0.3:35691 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001307s
	[INFO] 10.244.1.2:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	[INFO] 10.244.1.2:41925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207401s
	[INFO] 10.244.1.2:44306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000736s
	[INFO] 10.244.1.2:46158 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000547s
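	The coredns entries are the in-cluster lookups driven by the ping test: kubernetes.default, host.minikube.internal, and the matching PTR records, answered from both pod subnets (10.244.0.x and 10.244.1.x). One of them can be replayed with a throwaway pod; a sketch using the same busybox:1.28 image the test pulls (its nslookup is the old, working applet):

	    kubectl --context multinode-025000 run --rm -it dnscheck --restart=Never --image=gcr.io/k8s-minikube/busybox:1.28 -- nslookup host.minikube.internal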
	
	
	==> describe nodes <==
	Name:               multinode-025000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=multinode-025000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:39:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:44:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:43:36 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:43:36 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:43:36 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:43:36 +0000   Wed, 12 Jun 2024 21:39:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.198.154
	  Hostname:    multinode-025000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b73fb83b96f240f1a8c8901d7e999eaf
	  System UUID:                3e5a42d3-ea80-0c4d-ad18-4b76e4f3e22f
	  Boot ID:                    cb166f5a-c089-4209-aabc-6ed10eaa3bfa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-45qqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-7db6d8ff4d-vgcxw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m28s
	  kube-system                 etcd-multinode-025000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m44s
	  kube-system                 kindnet-bqlg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m28s
	  kube-system                 kube-apiserver-multinode-025000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-multinode-025000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-proxy-47lr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-multinode-025000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m49s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m49s)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m49s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m42s                  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s                  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s                  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m29s                  node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	  Normal  NodeReady                4m17s                  kubelet          Node multinode-025000 status is now: NodeReady
	
	
	Name:               multinode-025000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=multinode-025000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T14_42_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:42:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:44:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 21:43:41 +0000   Wed, 12 Jun 2024 21:42:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 21:43:41 +0000   Wed, 12 Jun 2024 21:42:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 21:43:41 +0000   Wed, 12 Jun 2024 21:42:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 21:43:41 +0000   Wed, 12 Jun 2024 21:42:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.196.105
	  Hostname:    multinode-025000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c11d7ff5518449f8bc8169a1fd7b0c4b
	  System UUID:                3b021c48-8479-f34c-83c2-77b944a77c5e
	  Boot ID:                    67e77c09-c6b2-4c01-b167-2481dd4a7a96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9bsls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kindnet-v4cqk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      94s
	  kube-system                 kube-proxy-tdcdp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 82s                kube-proxy       
	  Normal  RegisteredNode           94s                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	  Normal  NodeHasSufficientMemory  94s (x2 over 94s)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x2 over 94s)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x2 over 94s)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                75s                kubelet          Node multinode-025000-m02 status is now: NodeReady
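	Both nodes show Ready=True with MemoryPressure, DiskPressure, and PIDPressure all False, which is exactly what the NodePressure check in the start log verified via /api/v1/nodes. The capacity numbers it logged (2 CPUs and 17734596Ki of ephemeral storage per node) can be re-read without the full describe output; a sketch with custom columns (column paths assumed against the v1 Node schema):

	    kubectl --context multinode-025000 get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,PODS:.status.capacity.pods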
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun12 21:38] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.162481] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[ +30.042427] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.092125] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.514896] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.197018] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[  +0.196143] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
	[  +2.759118] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.182609] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.188084] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[Jun12 21:39] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[ +11.972264] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.107973] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.233810] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +6.870138] systemd-fstab-generator[1720]: Ignoring "noauto" option for root device
	[  +0.100226] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.539419] systemd-fstab-generator[2125]: Ignoring "noauto" option for root device
	[  +0.126030] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.425853] systemd-fstab-generator[2313]: Ignoring "noauto" option for root device
	[  +0.226946] kauditd_printk_skb: 12 callbacks suppressed
	[  +2.423272] hrtimer: interrupt took 1449816 ns
	[  +4.553625] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.222824] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [2455f315465b] <==
	{"level":"info","ts":"2024-06-12T21:39:26.793661Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T21:39:26.796107Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T21:39:26.798333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.198.154:2379"}
	{"level":"info","ts":"2024-06-12T21:39:26.798745Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:39:26.798984Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:39:26.799169Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T21:39:26.798783Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T21:39:26.799395Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-06-12T21:39:51.878197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.730409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025000\" ","response":"range_response_count:1 size:4487"}
	{"level":"info","ts":"2024-06-12T21:39:51.878887Z","caller":"traceutil/trace.go:171","msg":"trace[1581266893] range","detail":"{range_begin:/registry/minions/multinode-025000; range_end:; response_count:1; response_revision:419; }","duration":"191.462714ms","start":"2024-06-12T21:39:51.687414Z","end":"2024-06-12T21:39:51.878877Z","steps":["trace[1581266893] 'agreement among raft nodes before linearized reading'  (duration: 190.650608ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:39:51.878798Z","caller":"traceutil/trace.go:171","msg":"trace[678553806] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"323.397389ms","start":"2024-06-12T21:39:51.555332Z","end":"2024-06-12T21:39:51.87873Z","steps":["trace[678553806] 'process raft request'  (duration: 271.314804ms)","trace[678553806] 'compare'  (duration: 51.313479ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T21:39:51.87955Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:39:51.555312Z","time spent":"323.868092ms","remote":"127.0.0.1:44008","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":553,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-025000\" mod_revision:315 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-025000\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-025000\" > >"}
	{"level":"info","ts":"2024-06-12T21:39:51.878847Z","caller":"traceutil/trace.go:171","msg":"trace[1639058801] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:432; }","duration":"190.613108ms","start":"2024-06-12T21:39:51.687444Z","end":"2024-06-12T21:39:51.878057Z","steps":["trace[1639058801] 'read index received'  (duration: 139.210828ms)","trace[1639058801] 'applied index is now lower than readState.Index'  (duration: 51.40158ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T21:42:49.725297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"408.582737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-06-12T21:42:49.725637Z","caller":"traceutil/trace.go:171","msg":"trace[848319610] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:629; }","duration":"408.953432ms","start":"2024-06-12T21:42:49.316664Z","end":"2024-06-12T21:42:49.725618Z","steps":["trace[848319610] 'range keys from in-memory index tree'  (duration: 408.331041ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-12T21:42:49.72585Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:42:49.316627Z","time spent":"409.211328ms","remote":"127.0.0.1:43906","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-06-12T21:42:49.726071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.448799ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10845952194099954477 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-025000-m02\" mod_revision:613 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-025000-m02\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-025000-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-12T21:42:49.726418Z","caller":"traceutil/trace.go:171","msg":"trace[367156177] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:680; }","duration":"173.933975ms","start":"2024-06-12T21:42:49.552473Z","end":"2024-06-12T21:42:49.726407Z","steps":["trace[367156177] 'read index received'  (duration: 30.3µs)","trace[367156177] 'applied index is now lower than readState.Index'  (duration: 173.902775ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T21:42:49.727033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.554766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025000-m02\" ","response":"range_response_count:1 size:2848"}
	{"level":"info","ts":"2024-06-12T21:42:49.727084Z","caller":"traceutil/trace.go:171","msg":"trace[1847669297] range","detail":"{range_begin:/registry/minions/multinode-025000-m02; range_end:; response_count:1; response_revision:630; }","duration":"174.606466ms","start":"2024-06-12T21:42:49.552468Z","end":"2024-06-12T21:42:49.727074Z","steps":["trace[1847669297] 'agreement among raft nodes before linearized reading'  (duration: 174.164072ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:42:49.727368Z","caller":"traceutil/trace.go:171","msg":"trace[1223416252] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"391.275868ms","start":"2024-06-12T21:42:49.336081Z","end":"2024-06-12T21:42:49.727357Z","steps":["trace[1223416252] 'process raft request'  (duration: 180.17889ms)","trace[1223416252] 'compare'  (duration: 209.347701ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-12T21:42:49.727448Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-12T21:42:49.336064Z","time spent":"391.354767ms","remote":"127.0.0.1:44008","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":569,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-025000-m02\" mod_revision:613 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-025000-m02\" value_size:508 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-025000-m02\" > >"}
	{"level":"info","ts":"2024-06-12T21:42:49.81767Z","caller":"traceutil/trace.go:171","msg":"trace[1383367443] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"259.128536ms","start":"2024-06-12T21:42:49.558523Z","end":"2024-06-12T21:42:49.817652Z","steps":["trace[1383367443] 'process raft request'  (duration: 259.028037ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-12T21:42:49.876404Z","caller":"traceutil/trace.go:171","msg":"trace[32266009] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"140.329423ms","start":"2024-06-12T21:42:49.736056Z","end":"2024-06-12T21:42:49.876386Z","steps":["trace[32266009] 'process raft request'  (duration: 91.441577ms)","trace[32266009] 'compare'  (duration: 48.812547ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-12T21:43:25.14728Z","caller":"traceutil/trace.go:171","msg":"trace[1018541647] transaction","detail":"{read_only:false; response_revision:697; number_of_response:1; }","duration":"248.456182ms","start":"2024-06-12T21:43:24.898805Z","end":"2024-06-12T21:43:25.147261Z","steps":["trace[1018541647] 'process raft request'  (duration: 248.245985ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:44:13 up 6 min,  0 users,  load average: 0.16, 0.23, 0.11
	Linux multinode-025000 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4d60d82f6bc5] <==
	I0612 21:43:03.566877       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:43:13.591377       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:43:13.591475       1 main.go:227] handling current node
	I0612 21:43:13.591491       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:43:13.591497       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:43:23.598207       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:43:23.598316       1 main.go:227] handling current node
	I0612 21:43:23.598331       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:43:23.598338       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:43:33.611504       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:43:33.611641       1 main.go:227] handling current node
	I0612 21:43:33.611694       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:43:33.611715       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:43:43.618578       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:43:43.618737       1 main.go:227] handling current node
	I0612 21:43:43.620261       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:43:43.620341       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:43:53.634304       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:43:53.634408       1 main.go:227] handling current node
	I0612 21:43:53.634424       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:43:53.634430       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:44:03.641288       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:44:03.641341       1 main.go:227] handling current node
	I0612 21:44:03.641355       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:44:03.641361       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
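	kindnet's steady-state loop re-lists node IPs roughly every ten seconds (visible in the timestamps above) and programs routes for each peer's pod CIDR, so the CIDRs it logs should match what the IPAM controller assigned (10.244.0.0/24 and 10.244.1.0/24 here). A sketch to cross-check from the API side:

	    kubectl --context multinode-025000 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR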
	
	
	==> kube-apiserver [0749f44d0356] <==
	I0612 21:39:29.231495       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0612 21:39:29.241646       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0612 21:39:29.241739       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 21:39:30.264705       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 21:39:30.347908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 21:39:30.462574       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0612 21:39:30.500370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.198.154]
	I0612 21:39:30.503168       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 21:39:30.511673       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 21:39:31.255467       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 21:39:31.287575       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 21:39:31.335702       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0612 21:39:31.372374       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 21:39:45.330129       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0612 21:39:45.500544       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0612 21:43:29.703266       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60434: use of closed network connection
	E0612 21:43:30.124586       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60436: use of closed network connection
	E0612 21:43:30.696508       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60438: use of closed network connection
	E0612 21:43:31.114893       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60440: use of closed network connection
	E0612 21:43:31.561168       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60443: use of closed network connection
	E0612 21:43:31.984532       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60445: use of closed network connection
	E0612 21:43:32.753031       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60448: use of closed network connection
	E0612 21:43:43.163281       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60450: use of closed network connection
	E0612 21:43:43.573131       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60452: use of closed network connection
	E0612 21:43:53.979417       1 conn.go:339] Error on socket receive: read tcp 172.23.198.154:8443->172.23.192.1:60454: use of closed network connection
	
	
	==> kube-controller-manager [685d167da53c] <==
	I0612 21:39:45.868449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.90937ms"
	I0612 21:39:45.868845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.402µs"
	I0612 21:39:45.869382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.903µs"
	I0612 21:39:45.905402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="386.807µs"
	I0612 21:39:46.349409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.452815ms"
	I0612 21:39:46.386321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.301621ms"
	I0612 21:39:46.386974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="616.309µs"
	I0612 21:39:56.441072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="366.601µs"
	I0612 21:39:56.465727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.4µs"
	I0612 21:39:57.870560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.5µs"
	I0612 21:39:58.874445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448319ms"
	I0612 21:39:58.875168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="103.901µs"
	I0612 21:39:59.529553       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 21:42:39.169243       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 21:42:39.188142       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 21:42:39.563565       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 21:42:58.063730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 21:43:24.138579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.052538ms"
	I0612 21:43:24.156190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.434267ms"
	I0612 21:43:24.156677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.099µs"
	I0612 21:43:24.183391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.299µs"
	I0612 21:43:26.908415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.051448ms"
	I0612 21:43:26.908853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34µs"
	I0612 21:43:27.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.474956ms"
	I0612 21:43:27.304566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488944ms"
	
	
	==> kube-proxy [c4842faba751] <==
	I0612 21:39:47.407607       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:39:47.423801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.198.154"]
	I0612 21:39:47.480061       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:39:47.480182       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:39:47.480205       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:39:47.484521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:39:47.485171       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:39:47.485535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:39:47.488126       1 config.go:192] "Starting service config controller"
	I0612 21:39:47.488162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:39:47.488188       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:39:47.488197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:39:47.488969       1 config.go:319] "Starting node config controller"
	I0612 21:39:47.489001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:39:47.588500       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:39:47.588641       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:39:47.589226       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6b021c195669] <==
	W0612 21:39:29.271696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0612 21:39:29.271839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0612 21:39:29.275489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 21:39:29.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 21:39:29.296739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 21:39:29.297145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 21:39:29.433593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 21:39:29.433887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 21:39:29.471880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 21:39:29.471994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 21:39:29.482669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0612 21:39:29.483008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0612 21:39:29.569402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 21:39:29.571433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 21:39:29.677906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 21:39:29.677950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 21:39:29.687951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 21:39:29.688054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 21:39:29.780288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 21:39:29.780411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 21:39:29.832564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 21:39:29.832892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 21:39:29.889591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 21:39:29.889868       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 21:39:32.513980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 21:39:57 multinode-025000 kubelet[2132]: I0612 21:39:57.862861    2132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.862837873 podStartE2EDuration="4.862837873s" podCreationTimestamp="2024-06-12 21:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 21:39:57.833342955 +0000 UTC m=+26.726910509" watchObservedRunningTime="2024-06-12 21:39:57.862837873 +0000 UTC m=+26.756405327"
	Jun 12 21:39:57 multinode-025000 kubelet[2132]: I0612 21:39:57.864053    2132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podStartSLOduration=12.86404347 podStartE2EDuration="12.86404347s" podCreationTimestamp="2024-06-12 21:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 21:39:57.863843671 +0000 UTC m=+26.757411225" watchObservedRunningTime="2024-06-12 21:39:57.86404347 +0000 UTC m=+26.757610924"
	Jun 12 21:40:31 multinode-025000 kubelet[2132]: E0612 21:40:31.394056    2132 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:40:31 multinode-025000 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:40:31 multinode-025000 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:40:31 multinode-025000 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:40:31 multinode-025000 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:41:31 multinode-025000 kubelet[2132]: E0612 21:41:31.393604    2132 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:41:31 multinode-025000 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:41:31 multinode-025000 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:41:31 multinode-025000 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:41:31 multinode-025000 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:42:31 multinode-025000 kubelet[2132]: E0612 21:42:31.397814    2132 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:42:31 multinode-025000 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:42:31 multinode-025000 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:42:31 multinode-025000 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:42:31 multinode-025000 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 21:43:24 multinode-025000 kubelet[2132]: I0612 21:43:24.136796    2132 topology_manager.go:215] "Topology Admit Handler" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4" podNamespace="default" podName="busybox-fc5497c4f-45qqd"
	Jun 12 21:43:24 multinode-025000 kubelet[2132]: I0612 21:43:24.253350    2132 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7zn\" (UniqueName: \"kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn\") pod \"busybox-fc5497c4f-45qqd\" (UID: \"8736e2b2-a744-4092-ac73-c59700fda8a4\") " pod="default/busybox-fc5497c4f-45qqd"
	Jun 12 21:43:25 multinode-025000 kubelet[2132]: I0612 21:43:25.221335    2132 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27"
	Jun 12 21:43:31 multinode-025000 kubelet[2132]: E0612 21:43:31.394294    2132 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 21:43:31 multinode-025000 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 21:43:31 multinode-025000 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 21:43:31 multinode-025000 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 21:43:31 multinode-025000 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
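The recurring canary failure (once a minute here) means the guest kernel has no ip6tables nat table, so the kubelet cannot create its IPv6 KUBE-KUBELET-CANARY chain; IPv4 rules are unaffected. A quick check from the host — a sketch, assuming the ip6table_nat module even ships in this guest image:

	minikube -p multinode-025000 ssh -- "lsmod | grep -q ip6table_nat || echo 'ip6table_nat not loaded'"
	minikube -p multinode-025000 ssh -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n | head"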
	

-- /stdout --
** stderr ** 
	W0612 14:44:05.751803   11300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-025000 -n multinode-025000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-025000 -n multinode-025000: (11.7859966s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-025000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (55.26s)

TestMultiNode/serial/RestartKeepsNodes (517.6s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-025000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-025000
E0612 14:59:17.160578    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 14:59:51.913275    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-025000: (1m38.8183021s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-025000 --wait=true -v=8 --alsologtostderr
E0612 15:01:13.939109    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 15:04:51.921317    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 15:06:13.929089    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 15:06:15.174802    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-025000 --wait=true -v=8 --alsologtostderr: exit status 1 (6m8.3713147s)
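The cert_rotation errors above refer to client certificates of profiles apparently torn down earlier in the run (functional-269100, addons-605800); the in-process certificate watcher still holds their paths. Confirming they are stale — a sketch, assuming a POSIX shell and the MINIKUBE_HOME from this job:

	for p in functional-269100 addons-605800; do
	  test -f "$MINIKUBE_HOME/profiles/$p/client.crt" || echo "$p: client.crt gone (stale watcher entry)"
	done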

-- stdout --
	* [multinode-025000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-025000" primary control-plane node in "multinode-025000" cluster
	* Restarting existing hyperv VM for "multinode-025000" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-025000-m02" worker node in "multinode-025000" cluster
	* Restarting existing hyperv VM for "multinode-025000-m02" ...
	* Found network options:
	  - NO_PROXY=172.23.200.184
	  - NO_PROXY=172.23.200.184
	* Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	  - env NO_PROXY=172.23.200.184
	* Verifying Kubernetes components...
	
	* Starting "multinode-025000-m03" worker node in "multinode-025000" cluster
	* Restarting existing hyperv VM for "multinode-025000-m03" ...

-- /stdout --
** stderr ** 
	W0612 15:00:23.023483   13752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0612 15:00:23.024570   13752 out.go:291] Setting OutFile to fd 1068 ...
	I0612 15:00:23.025445   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 15:00:23.025445   13752 out.go:304] Setting ErrFile to fd 1628...
	I0612 15:00:23.025445   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 15:00:23.051240   13752 out.go:298] Setting JSON to false
	I0612 15:00:23.055591   13752 start.go:129] hostinfo: {"hostname":"minikube1","uptime":27975,"bootTime":1718201647,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 15:00:23.055591   13752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 15:00:23.136005   13752 out.go:177] * [multinode-025000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 15:00:23.144243   13752 notify.go:220] Checking for updates...
	I0612 15:00:23.180967   13752 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:00:23.194523   13752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 15:00:23.232736   13752 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 15:00:23.241902   13752 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 15:00:23.280655   13752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 15:00:23.376454   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:00:23.376454   13752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 15:00:28.888107   13752 out.go:177] * Using the hyperv driver based on existing profile
	I0612 15:00:28.939003   13752 start.go:297] selected driver: hyperv
	I0612 15:00:28.939977   13752 start.go:901] validating driver "hyperv" against &{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 15:00:28.940472   13752 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 15:00:28.993223   13752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 15:00:28.993326   13752 cni.go:84] Creating CNI manager for ""
	I0612 15:00:28.993326   13752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 15:00:28.993515   13752 start.go:340] cluster config:
	{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 15:00:28.993966   13752 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 15:00:29.075819   13752 out.go:177] * Starting "multinode-025000" primary control-plane node in "multinode-025000" cluster
	I0612 15:00:29.085745   13752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 15:00:29.085745   13752 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0612 15:00:29.085745   13752 cache.go:56] Caching tarball of preloaded images
	I0612 15:00:29.086702   13752 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 15:00:29.086702   13752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 15:00:29.086702   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
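The cluster config dumped above is persisted per profile as JSON, so the same data can be read back without parsing logs — a sketch, path taken from the save message above, jq assumed available and field names assumed to match the struct dump:

	jq '.Nodes[] | {Name, IP, ControlPlane}' "$MINIKUBE_HOME/profiles/multinode-025000/config.json"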
	I0612 15:00:29.089982   13752 start.go:360] acquireMachinesLock for multinode-025000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 15:00:29.090206   13752 start.go:364] duration metric: took 113.1µs to acquireMachinesLock for "multinode-025000"
	I0612 15:00:29.090382   13752 start.go:96] Skipping create...Using existing machine configuration
	I0612 15:00:29.090382   13752 fix.go:54] fixHost starting: 
	I0612 15:00:29.090911   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:31.876279   13752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0612 15:00:31.876676   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:31.876676   13752 fix.go:112] recreateIfNeeded on multinode-025000: state=Stopped err=<nil>
	W0612 15:00:31.876676   13752 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 15:00:31.899886   13752 out.go:177] * Restarting existing hyperv VM for "multinode-025000" ...
	I0612 15:00:31.920140   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-025000
	I0612 15:00:34.982854   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:34.982854   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:34.982854   13752 main.go:141] libmachine: Waiting for host to start...
	I0612 15:00:34.982854   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:37.224031   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:00:37.224147   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:37.224147   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:00:39.720049   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:39.720049   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:40.722981   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:42.914786   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:00:42.915043   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:42.915043   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:00:45.520993   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:45.521215   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:46.528435   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:48.777859   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:00:48.778063   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:48.778106   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:00:51.337551   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:51.337551   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:52.343181   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:54.597726   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:00:54.597726   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:54.597906   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:00:57.129606   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:57.129606   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:58.129819   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:00.392349   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:00.392349   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:00.392645   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:03.007334   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:03.007334   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:03.010991   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:05.196721   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:05.196721   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:05.197433   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:07.796762   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:07.796762   13752 main.go:141] libmachine: [stderr =====>] : 
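The repeated Get-VM pairs above are the driver's wait loop: poll the VM state, then the first adapter's first address, until both a Running state and a non-empty IP come back. The same loop as a standalone sketch (bash shelling out to powershell.exe, VM name from this run):

	VM=multinode-025000
	PS='C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe'
	while :; do
	  state=$("$PS" -NoProfile -NonInteractive "( Hyper-V\Get-VM $VM ).state" | tr -d '\r')
	  ip=$("$PS" -NoProfile -NonInteractive "(( Hyper-V\Get-VM $VM ).networkadapters[0]).ipaddresses[0]" | tr -d '\r')
	  [ "$state" = "Running" ] && [ -n "$ip" ] && { echo "$VM up at $ip"; break; }
	  sleep 1
	done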
	I0612 15:01:07.798026   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:01:07.800838   13752 machine.go:94] provisionDockerMachine start ...
	I0612 15:01:07.800923   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:09.972772   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:09.972772   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:09.972772   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:12.493479   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:12.493479   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:12.512050   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:12.513615   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:12.513615   13752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 15:01:12.644231   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 15:01:12.644377   13752 buildroot.go:166] provisioning hostname "multinode-025000"
	I0612 15:01:12.644497   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:14.791166   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:14.791166   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:14.802459   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:17.325104   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:17.325104   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:17.342028   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:17.342727   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:17.342727   13752 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025000 && echo "multinode-025000" | sudo tee /etc/hostname
	I0612 15:01:17.496769   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025000
	
	I0612 15:01:17.496769   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:19.612891   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:19.625233   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:19.625468   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:22.136802   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:22.136802   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:22.156209   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:22.156209   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:22.156853   13752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 15:01:22.304434   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
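With the guard script above applied, the hostname and its 127.0.1.1 mapping can be verified in one pass — a sketch over the same SSH path:

	minikube -p multinode-025000 ssh -- "hostname; grep -n '127.0.1.1' /etc/hosts"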
	I0612 15:01:22.304582   13752 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 15:01:22.304682   13752 buildroot.go:174] setting up certificates
	I0612 15:01:22.304758   13752 provision.go:84] configureAuth start
	I0612 15:01:22.304929   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:24.475460   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:24.475460   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:24.475721   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:27.022605   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:27.022605   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:27.034798   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:29.196649   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:29.196649   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:29.196649   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:31.706410   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:31.706410   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:31.706410   13752 provision.go:143] copyHostCerts
	I0612 15:01:31.718445   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 15:01:31.718445   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 15:01:31.718445   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 15:01:31.719342   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 15:01:31.720485   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 15:01:31.720717   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 15:01:31.720717   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 15:01:31.720717   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 15:01:31.722036   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 15:01:31.722251   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 15:01:31.722251   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 15:01:31.722644   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 15:01:31.723884   13752 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-025000 san=[127.0.0.1 172.23.200.184 localhost minikube multinode-025000]
	I0612 15:01:31.968051   13752 provision.go:177] copyRemoteCerts
	I0612 15:01:31.978531   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 15:01:31.978531   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:34.086511   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:34.086511   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:34.097813   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:36.512714   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:36.512714   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:36.523572   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:01:36.619849   13752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.641302s)
	I0612 15:01:36.619849   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 15:01:36.619849   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 15:01:36.670157   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 15:01:36.670739   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0612 15:01:36.715220   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 15:01:36.715606   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 15:01:36.756743   13752 provision.go:87] duration metric: took 14.4518735s to configureAuth
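configureAuth regenerated the server certificate with the SAN list shown earlier (127.0.0.1, 172.23.200.184, localhost, minikube, multinode-025000); the installed cert can be spot-checked in the guest — a sketch, assuming OpenSSL 1.1.1+ in the guest for the -ext flag:

	minikube -p multinode-025000 ssh -- \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName"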
	I0612 15:01:36.756743   13752 buildroot.go:189] setting minikube options for container-runtime
	I0612 15:01:36.757477   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:01:36.757477   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:38.740322   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:38.740322   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:38.752089   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:41.137747   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:41.137747   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:41.143755   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:41.144286   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:41.144286   13752 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 15:01:41.270398   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 15:01:41.270398   13752 buildroot.go:70] root file system type: tmpfs
	I0612 15:01:41.270605   13752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 15:01:41.270759   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:43.290625   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:43.290625   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:43.301117   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:45.720532   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:45.731356   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:45.737949   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:45.738921   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:45.738921   13752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 15:01:45.894484   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 15:01:45.894703   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:47.921662   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:47.921662   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:47.922998   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:50.324280   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:50.324280   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:50.342355   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:50.343153   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:50.343153   13752 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 15:01:52.774992   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0612 15:01:52.775052   13752 machine.go:97] duration metric: took 44.9740052s to provisionDockerMachine
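Note the update idiom a few lines back: diff the rendered unit against the installed one and only mv + daemon-reload + restart when they differ, so an unchanged config never bounces Docker. The end state can be confirmed with systemctl — a sketch:

	minikube -p multinode-025000 ssh -- "sudo systemctl is-enabled docker; systemctl cat docker | grep -m1 '^ExecStart=/'"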
	I0612 15:01:52.775088   13752 start.go:293] postStartSetup for "multinode-025000" (driver="hyperv")
	I0612 15:01:52.775127   13752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 15:01:52.787609   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 15:01:52.787609   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:54.799297   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:54.799297   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:54.799624   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:57.202331   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:57.202331   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:57.213066   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:01:57.314533   13752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5119571s)
	I0612 15:01:57.330091   13752 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 15:01:57.336815   13752 command_runner.go:130] > NAME=Buildroot
	I0612 15:01:57.336815   13752 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 15:01:57.336815   13752 command_runner.go:130] > ID=buildroot
	I0612 15:01:57.336815   13752 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 15:01:57.336815   13752 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 15:01:57.336924   13752 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 15:01:57.337014   13752 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 15:01:57.337050   13752 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 15:01:57.338266   13752 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 15:01:57.338338   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 15:01:57.351008   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 15:01:57.367855   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 15:01:57.410782   13752 start.go:296] duration metric: took 4.6356787s for postStartSetup
	I0612 15:01:57.410973   13752 fix.go:56] duration metric: took 1m28.3202151s for fixHost
	I0612 15:01:57.411094   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:59.432296   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:59.432296   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:59.432296   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:01.799333   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:01.809414   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:01.814747   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:02:01.815504   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:02:01.815504   13752 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 15:02:01.944249   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718229721.947015209
	
	I0612 15:02:01.944249   13752 fix.go:216] guest clock: 1718229721.947015209
	I0612 15:02:01.944421   13752 fix.go:229] Guest: 2024-06-12 15:02:01.947015209 -0700 PDT Remote: 2024-06-12 15:01:57.4109735 -0700 PDT m=+94.474017001 (delta=4.536041709s)
	I0612 15:02:01.944421   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:02:03.903036   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:02:03.903036   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:03.915082   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:06.269784   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:06.269784   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:06.286721   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:02:06.286898   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:02:06.286898   13752 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718229721
	I0612 15:02:06.425776   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 22:02:01 UTC 2024
	
	I0612 15:02:06.425831   13752 fix.go:236] clock set: Wed Jun 12 22:02:01 UTC 2024
	 (err=<nil>)
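The guest clock was about 4.5s ahead of the host snapshot, evidently past the driver's drift tolerance, so it was reset with date -s. The same delta can be measured by hand — a sketch, whole seconds only, assuming a POSIX shell on the host:

	host=$(date -u +%s)
	guest=$(minikube -p multinode-025000 ssh -- "date -u +%s" | tr -d '\r')
	echo "host-guest delta: $((host - guest))s"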
	I0612 15:02:06.425831   13752 start.go:83] releasing machines lock for "multinode-025000", held for 1m37.3353038s
	I0612 15:02:06.425890   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:02:08.402828   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:02:08.402828   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:08.413902   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:10.763921   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:10.763921   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:10.780104   13752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 15:02:10.780211   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:02:10.789901   13752 ssh_runner.go:195] Run: cat /version.json
	I0612 15:02:10.789901   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:02:12.871224   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:02:12.872396   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:12.871224   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:02:12.873520   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:12.874029   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:12.874158   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:15.442493   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:15.453605   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:15.453876   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:02:15.474546   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:15.474546   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:15.474546   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:02:15.537603   13752 command_runner.go:130] > {"iso_version": "v1.33.1-1718047936-19044", "kicbase_version": "v0.0.44-1718016726-19044", "minikube_version": "v1.33.1", "commit": "8a07c05cb41cba41fd6bf6981cdae9c899c82330"}
	I0612 15:02:15.537603   13752 ssh_runner.go:235] Completed: cat /version.json: (4.7476861s)
	I0612 15:02:15.551982   13752 ssh_runner.go:195] Run: systemctl --version
	I0612 15:02:15.612728   13752 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 15:02:15.613778   13752 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8325003s)
	I0612 15:02:15.613778   13752 command_runner.go:130] > systemd 252 (252)
	I0612 15:02:15.613857   13752 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0612 15:02:15.626624   13752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 15:02:15.632192   13752 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0612 15:02:15.635709   13752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 15:02:15.646874   13752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 15:02:15.675249   13752 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0612 15:02:15.675249   13752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
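Bridge and podman CNI configs are parked with a .mk_disabled suffix rather than deleted, leaving the kindnet config as the only active one. Listing what was disabled, and the inverse rename if it ever had to be undone — a sketch, run inside the VM:

	ls -l /etc/cni/net.d/
	for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done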
	I0612 15:02:15.675249   13752 start.go:494] detecting cgroup driver to use...
	I0612 15:02:15.675556   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 15:02:15.704025   13752 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0612 15:02:15.717565   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 15:02:15.751472   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 15:02:15.770467   13752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 15:02:15.783584   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 15:02:15.814866   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 15:02:15.849186   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 15:02:15.882284   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 15:02:15.914250   13752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 15:02:15.945545   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 15:02:15.975663   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 15:02:16.008244   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 15:02:16.038893   13752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 15:02:16.041397   13752 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 15:02:16.067860   13752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 15:02:16.100254   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:16.277337   13752 ssh_runner.go:195] Run: sudo systemctl restart containerd
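The run of sed edits above rewrites /etc/containerd/config.toml to pin the cgroupfs driver (SystemdCgroup = false) and the v2 runc runtime before reloading systemd and restarting containerd. A minimal in-process sketch of two of those rewrites with regexp; it assumes the same guest paths and root privileges.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	out := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
	// Equivalent of: sed 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g'
	out = regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"`).
		ReplaceAll(out, []byte(`"io.containerd.runc.v2"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart containerd` follow,
	// as the surrounding log lines show.
}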
	I0612 15:02:16.306088   13752 start.go:494] detecting cgroup driver to use...
	I0612 15:02:16.321276   13752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 15:02:16.345005   13752 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0612 15:02:16.345005   13752 command_runner.go:130] > [Unit]
	I0612 15:02:16.345005   13752 command_runner.go:130] > Description=Docker Application Container Engine
	I0612 15:02:16.345111   13752 command_runner.go:130] > Documentation=https://docs.docker.com
	I0612 15:02:16.345111   13752 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0612 15:02:16.345111   13752 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0612 15:02:16.345111   13752 command_runner.go:130] > StartLimitBurst=3
	I0612 15:02:16.345111   13752 command_runner.go:130] > StartLimitIntervalSec=60
	I0612 15:02:16.345111   13752 command_runner.go:130] > [Service]
	I0612 15:02:16.345111   13752 command_runner.go:130] > Type=notify
	I0612 15:02:16.345111   13752 command_runner.go:130] > Restart=on-failure
	I0612 15:02:16.345111   13752 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0612 15:02:16.345111   13752 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0612 15:02:16.345111   13752 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0612 15:02:16.345228   13752 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0612 15:02:16.345228   13752 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0612 15:02:16.345228   13752 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0612 15:02:16.345228   13752 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0612 15:02:16.345228   13752 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0612 15:02:16.345228   13752 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0612 15:02:16.345454   13752 command_runner.go:130] > ExecStart=
	I0612 15:02:16.345454   13752 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0612 15:02:16.345454   13752 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0612 15:02:16.345454   13752 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0612 15:02:16.345454   13752 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0612 15:02:16.345454   13752 command_runner.go:130] > LimitNOFILE=infinity
	I0612 15:02:16.345582   13752 command_runner.go:130] > LimitNPROC=infinity
	I0612 15:02:16.345582   13752 command_runner.go:130] > LimitCORE=infinity
	I0612 15:02:16.345582   13752 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0612 15:02:16.345582   13752 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0612 15:02:16.345582   13752 command_runner.go:130] > TasksMax=infinity
	I0612 15:02:16.345582   13752 command_runner.go:130] > TimeoutStartSec=0
	I0612 15:02:16.345582   13752 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0612 15:02:16.345582   13752 command_runner.go:130] > Delegate=yes
	I0612 15:02:16.345582   13752 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0612 15:02:16.345582   13752 command_runner.go:130] > KillMode=process
	I0612 15:02:16.345582   13752 command_runner.go:130] > [Install]
	I0612 15:02:16.345700   13752 command_runner.go:130] > WantedBy=multi-user.target
	I0612 15:02:16.357632   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 15:02:16.388628   13752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 15:02:16.433269   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 15:02:16.468774   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 15:02:16.502987   13752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 15:02:16.562283   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 15:02:16.586138   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 15:02:16.616419   13752 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0612 15:02:16.629391   13752 ssh_runner.go:195] Run: which cri-dockerd
	I0612 15:02:16.635116   13752 command_runner.go:130] > /usr/bin/cri-dockerd
	I0612 15:02:16.645833   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 15:02:16.664229   13752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 15:02:16.704572   13752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 15:02:16.870352   13752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 15:02:17.038400   13752 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 15:02:17.038728   13752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 15:02:17.089182   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:17.266251   13752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 15:02:19.887314   13752 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6210085s)
	I0612 15:02:19.899055   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 15:02:19.939579   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 15:02:19.981164   13752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 15:02:20.173450   13752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 15:02:20.348512   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:20.517574   13752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 15:02:20.560540   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 15:02:20.594984   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:20.770037   13752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
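The unmask/enable/daemon-reload/restart sequence above brings the cri-docker socket and service up in a fixed order, checking is-active between steps. A sketch that shells out the same way; it assumes passwordless sudo on the guest and collapses the is-active probes for brevity.

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("sudo", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	run("systemctl", "unmask", "cri-docker.socket")
	run("systemctl", "enable", "cri-docker.socket")
	run("systemctl", "daemon-reload")
	run("systemctl", "restart", "cri-docker.socket")
	run("systemctl", "daemon-reload")
	run("systemctl", "restart", "cri-docker.service")
}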
	I0612 15:02:20.872956   13752 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 15:02:20.886221   13752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 15:02:20.895051   13752 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0612 15:02:20.895111   13752 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 15:02:20.895187   13752 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I0612 15:02:20.895187   13752 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0612 15:02:20.895187   13752 command_runner.go:130] > Access: 2024-06-12 22:02:20.800595808 +0000
	I0612 15:02:20.895187   13752 command_runner.go:130] > Modify: 2024-06-12 22:02:20.800595808 +0000
	I0612 15:02:20.895244   13752 command_runner.go:130] > Change: 2024-06-12 22:02:20.803595814 +0000
	I0612 15:02:20.895244   13752 command_runner.go:130] >  Birth: -
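"Will wait 60s for socket path" above is a stat poll against /var/run/cri-dockerd.sock until the socket file appears. A sketch of that wait loop; the 500ms poll interval is an assumption, as the log does not show the real interval.

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cri-dockerd socket is up")
}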
	I0612 15:02:20.895244   13752 start.go:562] Will wait 60s for crictl version
	I0612 15:02:20.906649   13752 ssh_runner.go:195] Run: which crictl
	I0612 15:02:20.913520   13752 command_runner.go:130] > /usr/bin/crictl
	I0612 15:02:20.924518   13752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 15:02:20.974410   13752 command_runner.go:130] > Version:  0.1.0
	I0612 15:02:20.974463   13752 command_runner.go:130] > RuntimeName:  docker
	I0612 15:02:20.974463   13752 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0612 15:02:20.974523   13752 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 15:02:20.974633   13752 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 15:02:20.985231   13752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 15:02:21.014499   13752 command_runner.go:130] > 26.1.4
	I0612 15:02:21.025082   13752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 15:02:21.056249   13752 command_runner.go:130] > 26.1.4
	I0612 15:02:21.062089   13752 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 15:02:21.062184   13752 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 15:02:21.066424   13752 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 15:02:21.066424   13752 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 15:02:21.066424   13752 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 15:02:21.066424   13752 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 15:02:21.070396   13752 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 15:02:21.070436   13752 ip.go:210] interface addr: 172.23.192.1/20
	I0612 15:02:21.090525   13752 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 15:02:21.092788   13752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
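The /etc/hosts update above is a grep -v / echo / cp pipeline: strip any stale host.minikube.internal entry, append the current gateway IP, and copy the result back. The same idiom in Go, writing through a temp file so the replacement is a single rename.

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "172.23.192.1\thost.minikube.internal")
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		log.Fatal(err)
	}
}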
	I0612 15:02:21.117548   13752 kubeadm.go:877] updating cluster {Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.200.184 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 15:02:21.117926   13752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 15:02:21.126879   13752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0612 15:02:21.149231   13752 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 15:02:21.149231   13752 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0612 15:02:21.150228   13752 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0612 15:02:21.150228   13752 docker.go:615] Images already preloaded, skipping extraction
	I0612 15:02:21.159820   13752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0612 15:02:21.185401   13752 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 15:02:21.185401   13752 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0612 15:02:21.185401   13752 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0612 15:02:21.185401   13752 cache_images.go:84] Images are preloaded, skipping loading
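The preload decision above compares `docker images --format {{.Repository}}:{{.Tag}}` output against the images the release requires, and skips tarball extraction when everything is already present. A sketch of that check using a subset of the list shown in the log.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	required := []string{ // subset of the full list above, for illustration
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	}
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Printf("missing %s: extraction needed\n", img)
			return
		}
	}
	fmt.Println("Images already preloaded, skipping extraction")
}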
	I0612 15:02:21.185401   13752 kubeadm.go:928] updating node { 172.23.200.184 8443 v1.30.1 docker true true} ...
	I0612 15:02:21.185401   13752 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.200.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
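The kubelet unit above uses the empty ExecStart= line to clear any inherited start command before setting its own, the same drop-in trick the docker.service comments explained earlier. A sketch that renders a unit of this shape with text/template; the field values are from this run, but the template itself is an illustration, not minikube's actual template.

package main

import (
	"log"
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		"v1.30.1", "multinode-025000", "172.23.200.184",
	})
	if err != nil {
		log.Fatal(err)
	}
}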
	I0612 15:02:21.195657   13752 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0612 15:02:21.219723   13752 command_runner.go:130] > cgroupfs
	I0612 15:02:21.227104   13752 cni.go:84] Creating CNI manager for ""
	I0612 15:02:21.227176   13752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 15:02:21.227255   13752 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 15:02:21.227255   13752 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.200.184 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025000 NodeName:multinode-025000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.200.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.200.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 15:02:21.227255   13752 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.200.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-025000"
	  kubeletExtraArgs:
	    node-ip: 172.23.200.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.200.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0612 15:02:21.238572   13752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 15:02:21.258967   13752 command_runner.go:130] > kubeadm
	I0612 15:02:21.259090   13752 command_runner.go:130] > kubectl
	I0612 15:02:21.259090   13752 command_runner.go:130] > kubelet
	I0612 15:02:21.259090   13752 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 15:02:21.269264   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 15:02:21.290390   13752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0612 15:02:21.319770   13752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 15:02:21.348775   13752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
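The "scp memory --> ..." lines above copy generated config bytes straight from minikube's memory to the guest, with no local file in between. A sketch of the same idea with SFTP standing in for minikube's scp transfer, built on github.com/pkg/sftp over an SSH client like the earlier sketch; the key path is again a placeholder.

package main

import (
	"log"
	"os"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	conn, err := ssh.Dial("tcp", "172.23.200.184:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client, err := sftp.NewClient(conn)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// "scp memory": the payload never touches the local disk.
	f, err := client.Create("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := f.Write([]byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n")); err != nil {
		log.Fatal(err)
	}
}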
	I0612 15:02:21.388388   13752 ssh_runner.go:195] Run: grep 172.23.200.184	control-plane.minikube.internal$ /etc/hosts
	I0612 15:02:21.392093   13752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.200.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 15:02:21.424361   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:21.598889   13752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 15:02:21.628001   13752 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000 for IP: 172.23.200.184
	I0612 15:02:21.628001   13752 certs.go:194] generating shared ca certs ...
	I0612 15:02:21.628160   13752 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:21.628878   13752 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 15:02:21.629121   13752 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 15:02:21.629121   13752 certs.go:256] generating profile certs ...
	I0612 15:02:21.630240   13752 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\client.key
	I0612 15:02:21.630240   13752 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.dac33de1
	I0612 15:02:21.630240   13752 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.dac33de1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.200.184]
	I0612 15:02:21.786227   13752 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.dac33de1 ...
	I0612 15:02:21.786227   13752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.dac33de1: {Name:mk0970a1a7df551c6e9312560c14ab64a80c5ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:21.793525   13752 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.dac33de1 ...
	I0612 15:02:21.793525   13752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.dac33de1: {Name:mk4749182fd801b252e332471089f28320779661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:21.795038   13752 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.dac33de1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt
	I0612 15:02:21.807459   13752 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.dac33de1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key
	I0612 15:02:21.808767   13752 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key
	I0612 15:02:21.808767   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 15:02:21.810003   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 15:02:21.810003   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 15:02:21.810359   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 15:02:21.810359   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 15:02:21.810359   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 15:02:21.810359   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 15:02:21.811045   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 15:02:21.811292   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 15:02:21.812110   13752 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 15:02:21.812305   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 15:02:21.812578   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 15:02:21.813151   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 15:02:21.813456   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 15:02:21.813905   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 15:02:21.813905   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 15:02:21.814578   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 15:02:21.814880   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:21.815135   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 15:02:21.862681   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 15:02:21.910350   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 15:02:21.961376   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 15:02:22.001691   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 15:02:22.052317   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 15:02:22.094125   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 15:02:22.148089   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 15:02:22.194034   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 15:02:22.233292   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 15:02:22.289534   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 15:02:22.334222   13752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 15:02:22.377356   13752 ssh_runner.go:195] Run: openssl version
	I0612 15:02:22.385877   13752 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 15:02:22.398163   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 15:02:22.433853   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 15:02:22.441126   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 15:02:22.441264   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 15:02:22.451480   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 15:02:22.455048   13752 command_runner.go:130] > 51391683
	I0612 15:02:22.471286   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 15:02:22.500977   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 15:02:22.530484   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 15:02:22.539178   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 15:02:22.539417   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 15:02:22.550319   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 15:02:22.558360   13752 command_runner.go:130] > 3ec20f2e
	I0612 15:02:22.569385   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 15:02:22.599508   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 15:02:22.628984   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:22.636280   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:22.636280   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:22.646032   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:22.648698   13752 command_runner.go:130] > b5213941
	I0612 15:02:22.665790   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0612 15:02:22.696515   13752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 15:02:22.705902   13752 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 15:02:22.705980   13752 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0612 15:02:22.706026   13752 command_runner.go:130] > Device: 8,1	Inode: 3149138     Links: 1
	I0612 15:02:22.706026   13752 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 15:02:22.706026   13752 command_runner.go:130] > Access: 2024-06-12 21:39:19.572401955 +0000
	I0612 15:02:22.706086   13752 command_runner.go:130] > Modify: 2024-06-12 21:39:19.572401955 +0000
	I0612 15:02:22.706086   13752 command_runner.go:130] > Change: 2024-06-12 21:39:19.572401955 +0000
	I0612 15:02:22.706086   13752 command_runner.go:130] >  Birth: 2024-06-12 21:39:19.572401955 +0000
	I0612 15:02:22.719217   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 15:02:22.728547   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.740117   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 15:02:22.751561   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.763163   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 15:02:22.766461   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.787627   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 15:02:22.797637   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.811117   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 15:02:22.819611   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.830384   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 15:02:22.840557   13752 command_runner.go:130] > Certificate will not expire
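Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds). The equivalent check in Go with crypto/x509, printing the same verdict strings openssl does.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}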
	I0612 15:02:22.840843   13752 kubeadm.go:391] StartCluster: {Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.200.184 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 15:02:22.848634   13752 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 15:02:22.882807   13752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 15:02:22.901123   13752 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0612 15:02:22.901123   13752 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0612 15:02:22.901178   13752 command_runner.go:130] > /var/lib/minikube/etcd:
	I0612 15:02:22.901178   13752 command_runner.go:130] > member
	W0612 15:02:22.901233   13752 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 15:02:22.901306   13752 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 15:02:22.901378   13752 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 15:02:22.912427   13752 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 15:02:22.930393   13752 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 15:02:22.931076   13752 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-025000" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:02:22.932207   13752 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-025000" cluster setting kubeconfig missing "multinode-025000" context setting]
	I0612 15:02:22.932969   13752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:22.948491   13752 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:02:22.949398   13752 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.200.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 15:02:22.951071   13752 cert_rotation.go:137] Starting client certificate rotation controller
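The kubeconfig repair above re-adds the missing multinode-025000 cluster and context entries before writing the file back. A sketch of that shape using client-go's clientcmd package; the paths are placeholders and this is an illustration of the repair, not minikube's own code.

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const path = "/path/to/kubeconfig" // hypothetical; the run uses the Windows profile path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	name := "multinode-025000"
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{
			Server:               "https://172.23.200.184:8443",
			CertificateAuthority: "/path/to/ca.crt", // hypothetical
		}
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			log.Fatal(err)
		}
	}
}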
	I0612 15:02:22.961610   13752 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 15:02:22.981445   13752 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0612 15:02:22.981445   13752 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0612 15:02:22.981445   13752 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0612 15:02:22.981445   13752 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0612 15:02:22.981445   13752 command_runner.go:130] >  kind: InitConfiguration
	I0612 15:02:22.981599   13752 command_runner.go:130] >  localAPIEndpoint:
	I0612 15:02:22.981599   13752 command_runner.go:130] > -  advertiseAddress: 172.23.198.154
	I0612 15:02:22.981599   13752 command_runner.go:130] > +  advertiseAddress: 172.23.200.184
	I0612 15:02:22.981599   13752 command_runner.go:130] >    bindPort: 8443
	I0612 15:02:22.981599   13752 command_runner.go:130] >  bootstrapTokens:
	I0612 15:02:22.981599   13752 command_runner.go:130] >    - groups:
	I0612 15:02:22.981599   13752 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0612 15:02:22.981599   13752 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0612 15:02:22.981599   13752 command_runner.go:130] >    name: "multinode-025000"
	I0612 15:02:22.981755   13752 command_runner.go:130] >    kubeletExtraArgs:
	I0612 15:02:22.981755   13752 command_runner.go:130] > -    node-ip: 172.23.198.154
	I0612 15:02:22.981755   13752 command_runner.go:130] > +    node-ip: 172.23.200.184
	I0612 15:02:22.981755   13752 command_runner.go:130] >    taints: []
	I0612 15:02:22.981812   13752 command_runner.go:130] >  ---
	I0612 15:02:22.981812   13752 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0612 15:02:22.981812   13752 command_runner.go:130] >  kind: ClusterConfiguration
	I0612 15:02:22.981812   13752 command_runner.go:130] >  apiServer:
	I0612 15:02:22.981812   13752 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.23.198.154"]
	I0612 15:02:22.981812   13752 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.23.200.184"]
	I0612 15:02:22.981887   13752 command_runner.go:130] >    extraArgs:
	I0612 15:02:22.981887   13752 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0612 15:02:22.981887   13752 command_runner.go:130] >  controllerManager:
	I0612 15:02:22.981977   13752 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.23.198.154
	+  advertiseAddress: 172.23.200.184
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-025000"
	   kubeletExtraArgs:
	-    node-ip: 172.23.198.154
	+    node-ip: 172.23.200.184
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.23.198.154"]
	+  certSANs: ["127.0.0.1", "localhost", "172.23.200.184"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
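The drift check above leans on diff's exit status: 0 means the stored kubeadm.yaml matches the new one, 1 means they differ and the cluster gets reconfigured from the new file. A sketch of that decision; note that exec.Cmd.Output still returns the captured stdout (the diff text) alongside the ExitError.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no kubeadm config drift")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Printf("detected kubeadm config drift (will reconfigure):\n%s", out)
	default:
		log.Fatal(err)
	}
}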
	I0612 15:02:22.982063   13752 kubeadm.go:1154] stopping kube-system containers ...
	I0612 15:02:22.990351   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 15:02:23.017079   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:02:23.018014   13752 command_runner.go:130] > 61910369e0d4
	I0612 15:02:23.018014   13752 command_runner.go:130] > 5b9e051df484
	I0612 15:02:23.018014   13752 command_runner.go:130] > 894c58e9fe75
	I0612 15:02:23.018014   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:02:23.018014   13752 command_runner.go:130] > c4842faba751
	I0612 15:02:23.018014   13752 command_runner.go:130] > fad98f611536
	I0612 15:02:23.018014   13752 command_runner.go:130] > 92f2d5f19e95
	I0612 15:02:23.018014   13752 command_runner.go:130] > 6b021c195669
	I0612 15:02:23.018014   13752 command_runner.go:130] > 2455f315465b
	I0612 15:02:23.018014   13752 command_runner.go:130] > 685d167da53c
	I0612 15:02:23.018122   13752 command_runner.go:130] > 0749f44d0356
	I0612 15:02:23.018122   13752 command_runner.go:130] > 2784305b1d5e
	I0612 15:02:23.018122   13752 command_runner.go:130] > 40443305b24f
	I0612 15:02:23.018122   13752 command_runner.go:130] > d9933fdc9ca7
	I0612 15:02:23.018122   13752 command_runner.go:130] > bb4351fab502
	I0612 15:02:23.018122   13752 docker.go:483] Stopping containers: [e83cf4eef49e 61910369e0d4 5b9e051df484 894c58e9fe75 4d60d82f6bc5 c4842faba751 fad98f611536 92f2d5f19e95 6b021c195669 2455f315465b 685d167da53c 0749f44d0356 2784305b1d5e 40443305b24f d9933fdc9ca7 bb4351fab502]
	I0612 15:02:23.027403   13752 ssh_runner.go:195] Run: docker stop e83cf4eef49e 61910369e0d4 5b9e051df484 894c58e9fe75 4d60d82f6bc5 c4842faba751 fad98f611536 92f2d5f19e95 6b021c195669 2455f315465b 685d167da53c 0749f44d0356 2784305b1d5e 40443305b24f d9933fdc9ca7 bb4351fab502
	I0612 15:02:23.056576   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:02:23.056576   13752 command_runner.go:130] > 61910369e0d4
	I0612 15:02:23.056576   13752 command_runner.go:130] > 5b9e051df484
	I0612 15:02:23.056576   13752 command_runner.go:130] > 894c58e9fe75
	I0612 15:02:23.056576   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:02:23.056576   13752 command_runner.go:130] > c4842faba751
	I0612 15:02:23.056576   13752 command_runner.go:130] > fad98f611536
	I0612 15:02:23.056665   13752 command_runner.go:130] > 92f2d5f19e95
	I0612 15:02:23.056665   13752 command_runner.go:130] > 6b021c195669
	I0612 15:02:23.056665   13752 command_runner.go:130] > 2455f315465b
	I0612 15:02:23.056665   13752 command_runner.go:130] > 685d167da53c
	I0612 15:02:23.056665   13752 command_runner.go:130] > 0749f44d0356
	I0612 15:02:23.056665   13752 command_runner.go:130] > 2784305b1d5e
	I0612 15:02:23.056665   13752 command_runner.go:130] > 40443305b24f
	I0612 15:02:23.056665   13752 command_runner.go:130] > d9933fdc9ca7
	I0612 15:02:23.056665   13752 command_runner.go:130] > bb4351fab502
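Stopping the kube-system containers above is a two-step docker call: list the IDs whose names match k8s_.*_(kube-system)_, then stop them all in one invocation. A sketch of that sweep.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	stop := exec.Command("docker", append([]string{"stop"}, ids...)...)
	if out, err := stop.CombinedOutput(); err != nil {
		log.Fatalf("docker stop failed: %v\n%s", err, out)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}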
	I0612 15:02:23.067475   13752 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 15:02:23.108441   13752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 15:02:23.126824   13752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0612 15:02:23.126824   13752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0612 15:02:23.127691   13752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0612 15:02:23.127756   13752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 15:02:23.128040   13752 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 15:02:23.128102   13752 kubeadm.go:156] found existing configuration files:
	
	I0612 15:02:23.139648   13752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 15:02:23.142582   13752 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 15:02:23.156364   13752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 15:02:23.168231   13752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 15:02:23.196226   13752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 15:02:23.199394   13752 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 15:02:23.212511   13752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 15:02:23.223902   13752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 15:02:23.253475   13752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 15:02:23.255184   13752 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 15:02:23.270103   13752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 15:02:23.281449   13752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 15:02:23.309342   13752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 15:02:23.319462   13752 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 15:02:23.325131   13752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 15:02:23.337594   13752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 15:02:23.366068   13752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
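The grep/rm sweep above keeps a /etc/kubernetes/*.conf file only if it already points at control-plane.minikube.internal:8443, and removes it otherwise so the kubeconfig phase below regenerates it. A local sketch of the same policy; the real paths require root.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), want) {
			os.Remove(f) // ignore "not found", mirroring rm -f
			fmt.Printf("removed (stale or absent): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}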
	I0612 15:02:23.384106   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:23.682735   13752 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 15:02:23.684474   13752 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 15:02:23.684474   13752 command_runner.go:130] > [certs] Using the existing "sa" key
	I0612 15:02:23.684474   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:25.071287   13752 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 15:02:25.072170   13752 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 15:02:25.072170   13752 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 15:02:25.072170   13752 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 15:02:25.072241   13752 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 15:02:25.072241   13752 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 15:02:25.072241   13752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3877624s)
	I0612 15:02:25.072370   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:25.330905   13752 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 15:02:25.330976   13752 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 15:02:25.330976   13752 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0612 15:02:25.331087   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:25.419961   13752 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 15:02:25.420052   13752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 15:02:25.420052   13752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 15:02:25.420119   13752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 15:02:25.420119   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:25.526305   13752 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
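The block above replays kubeadm init phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the same generated config, which is why every certificate is reported as "Using existing" rather than regenerated. A sketch of driving the same sequence with os/exec, assuming the kubeadm binary and config path shown in the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Replay the phased kubeadm init sequence from the log. Each phase runs
// through bash with minikube's pinned binary dir prefixed to PATH, exactly
// as the ssh_runner command lines show.
func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmdline := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		cmd := exec.Command("/bin/bash", "-c", cmdline)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", phase, err)
			return
		}
	}
}
```

Running phases individually lets a restart skip work that is already done; a full `kubeadm init` would refuse to run over an existing cluster.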
	I0612 15:02:25.526535   13752 api_server.go:52] waiting for apiserver process to appear ...
	I0612 15:02:25.541441   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:26.054630   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:26.553329   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:27.054764   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:27.539054   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:27.568811   13752 command_runner.go:130] > 1830
	I0612 15:02:27.568941   13752 api_server.go:72] duration metric: took 2.0426292s to wait for apiserver process to appear ...
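The half-second pgrep loop above waits for the kube-apiserver static-pod process to exist before any HTTP probing begins; PID 1830 appears after about 2s. A standalone sketch of the same poll (Linux-only, purely illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Poll for the apiserver process the way api_server.go does: pgrep -xnf
// prints the newest PID whose full command line matches the pattern.
func main() {
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // 1830 in the log
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```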
	I0612 15:02:27.568980   13752 api_server.go:88] waiting for apiserver healthz status ...
	I0612 15:02:27.569016   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:30.955519   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 15:02:30.955519   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 15:02:30.956234   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:30.985178   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 15:02:30.986074   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 15:02:31.077288   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:31.086447   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 15:02:31.086491   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
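A 500 from /healthz comes with one line per registered check: [+] marks a passing check, [-] a failing one, and "reason withheld" means the failure detail is not disclosed in the response (it is typically still recorded in the apiserver's own log). Here only the rbac and scheduling bootstrap hooks are still pending. A tiny parser for that format, fed an excerpt of the body above:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Split a verbose /healthz body into passing and failing checks.
func main() {
	const body = `[+]ping ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed`
	sc := bufio.NewScanner(strings.NewReader(body))
	var failed []string
	for sc.Scan() {
		if line := sc.Text(); strings.HasPrefix(line, "[-]") {
			failed = append(failed, strings.TrimPrefix(line, "[-]"))
		}
	}
	fmt.Println("failing checks:", failed)
}
```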
	I0612 15:02:31.583106   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:31.595406   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 15:02:31.595491   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 15:02:32.074113   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:32.082132   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 15:02:32.082237   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 15:02:32.580946   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:32.591357   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 200:
	ok
	I0612 15:02:32.591357   13752 round_trippers.go:463] GET https://172.23.200.184:8443/version
	I0612 15:02:32.591886   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:32.591886   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:32.591886   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:32.604444   13752 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0612 15:02:32.604444   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:32.604444   13752 round_trippers.go:580]     Content-Length: 263
	I0612 15:02:32.604444   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:32 GMT
	I0612 15:02:32.605361   13752 round_trippers.go:580]     Audit-Id: a9cb0e97-447e-4cdb-98d9-169c85c1c86e
	I0612 15:02:32.605361   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:32.605361   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:32.605361   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:32.605361   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:32.605361   13752 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
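Once /healthz is green, the /version payload above is decoded to record the control-plane version (v1.30.1). The body matches the shape of apimachinery's version.Info; a minimal decode of the same JSON, using a hand-rolled struct rather than the real type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Field subset of the /version payload shown in the log.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	payload := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.1","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.30.1
}
```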
	I0612 15:02:32.605361   13752 api_server.go:141] control plane version: v1.30.1
	I0612 15:02:32.605361   13752 api_server.go:131] duration metric: took 5.0363644s to wait for apiserver health ...
	I0612 15:02:32.605361   13752 cni.go:84] Creating CNI manager for ""
	I0612 15:02:32.605361   13752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 15:02:32.608697   13752 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0612 15:02:32.620615   13752 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0612 15:02:32.632215   13752 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0612 15:02:32.632288   13752 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0612 15:02:32.632288   13752 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0612 15:02:32.632288   13752 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 15:02:32.632288   13752 command_runner.go:130] > Access: 2024-06-12 22:01:00.846027700 +0000
	I0612 15:02:32.632288   13752 command_runner.go:130] > Modify: 2024-06-11 01:01:29.000000000 +0000
	I0612 15:02:32.632410   13752 command_runner.go:130] > Change: 2024-06-12 15:00:50.948000000 +0000
	I0612 15:02:32.632410   13752 command_runner.go:130] >  Birth: -
	I0612 15:02:32.632535   13752 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0612 15:02:32.632535   13752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0612 15:02:32.698218   13752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0612 15:02:33.730843   13752 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0612 15:02:33.730843   13752 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0612 15:02:33.730965   13752 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0612 15:02:33.730965   13752 command_runner.go:130] > daemonset.apps/kindnet configured
	I0612 15:02:33.731006   13752 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0327847s)
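With three nodes detected, minikube picked kindnet as the CNI, shipped the manifest to /var/tmp/minikube/cni.yaml, and applied it with the pinned kubectl; the "unchanged"/"configured" lines show `kubectl apply` converging idempotently on a restart. The equivalent invocation run directly, assuming the paths from the log:

```go
package main

import (
	"os"
	"os/exec"
)

// Re-apply the CNI manifest the way the log does. kubectl apply is
// idempotent: resources that did not change report "unchanged", those
// that did report "configured".
func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```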
	I0612 15:02:33.731075   13752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 15:02:33.731132   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:02:33.731132   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:33.731132   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:33.731132   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:33.740158   13752 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 15:02:33.741945   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:33.742002   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:33.742002   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:33.742034   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:33.742034   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:33 GMT
	I0612 15:02:33.742034   13752 round_trippers.go:580]     Audit-Id: 5839a38a-4275-42e5-a4af-5068719c0c68
	I0612 15:02:33.742034   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:33.744890   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1790"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87778 chars]
	I0612 15:02:33.753330   13752 system_pods.go:59] 12 kube-system pods found
	I0612 15:02:33.753422   13752 system_pods.go:61] "coredns-7db6d8ff4d-vgcxw" [c5bd143a-d39e-46af-9308-0a97bb45729c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 15:02:33.753456   13752 system_pods.go:61] "etcd-multinode-025000" [be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 15:02:33.753456   13752 system_pods.go:61] "kindnet-8252q" [b1c2b9b3-0fd6-4393-b818-e7e823f89acc] Running
	I0612 15:02:33.753456   13752 system_pods.go:61] "kindnet-bqlg8" [1f004a05-3f5f-444b-9ac0-88f0e23da904] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0612 15:02:33.753456   13752 system_pods.go:61] "kindnet-v4cqk" [31faf6fc-5371-4f19-b71f-0a41b6dd2f79] Running
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-apiserver-multinode-025000" [63e55411-d432-4e5a-becc-fae0887fecae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-controller-manager-multinode-025000" [68c9aa4f-49ee-439c-ad51-7943e65c0085] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-proxy-47lr8" [10b24fa7-8eea-4fbb-ab18-404e853aa7ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-proxy-7jwdg" [643030f7-b876-4243-bacc-04205e88cc9e] Running
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-proxy-tdcdp" [b623833c-ce55-46b1-a840-99b3143adac1] Running
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-scheduler-multinode-025000" [83b272cb-1286-47d8-bcb1-a66056dff2a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 15:02:33.753500   13752 system_pods.go:61] "storage-provisioner" [d20f7489-1aa1-44b8-9221-4d1849884be4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 15:02:33.753500   13752 system_pods.go:74] duration metric: took 22.3672ms to wait for pod list to return data ...
	I0612 15:02:33.753500   13752 node_conditions.go:102] verifying NodePressure condition ...
	I0612 15:02:33.753500   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes
	I0612 15:02:33.753500   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:33.753500   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:33.753500   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:33.754196   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:33.754196   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:33.754196   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:33.754196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:33.754196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:33.754196   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:33 GMT
	I0612 15:02:33.754196   13752 round_trippers.go:580]     Audit-Id: 0f6b96c8-8308-4e48-9626-247692a01d6f
	I0612 15:02:33.754196   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:33.754196   13752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1790"},"items":[{"metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15630 chars]
	I0612 15:02:33.759594   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:02:33.759594   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:02:33.759741   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:02:33.759741   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:02:33.759741   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:02:33.759741   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:02:33.759741   13752 node_conditions.go:105] duration metric: took 6.2418ms to run NodePressure ...
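The three repeated capacity pairs above are one per node of this 3-node cluster: the NodePressure verification reads each node's ephemeral-storage and CPU capacity. A client-go sketch that prints the same two figures per node, assuming the kubeconfig path from the log is reachable:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// List nodes and print the two capacity figures node_conditions.go reports.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// e.g. "multinode-025000: cpu=2 ephemeral-storage=17734596Ki"
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```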
	I0612 15:02:33.759741   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:34.187972   13752 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0612 15:02:34.188040   13752 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0612 15:02:34.188135   13752 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 15:02:34.188164   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0612 15:02:34.188164   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.188164   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.188164   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.193817   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.193862   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.193862   13752 round_trippers.go:580]     Audit-Id: 99e1870d-e541-4341-b9c7-50f896d322cd
	I0612 15:02:34.193862   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.193919   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.193919   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.193919   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.193919   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.195332   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1796"},"items":[{"metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2","resourceVersion":"1782","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.200.184:2379","kubernetes.io/config.hash":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.mirror":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.seen":"2024-06-12T22:02:25.563300686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0612 15:02:34.196585   13752 kubeadm.go:733] kubelet initialised
	I0612 15:02:34.197133   13752 kubeadm.go:734] duration metric: took 8.4502ms waiting for restarted kubelet to initialise ...
	I0612 15:02:34.197133   13752 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:02:34.197298   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:02:34.197298   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.197298   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.197298   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.199703   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:02:34.199703   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.199703   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.202944   13752 round_trippers.go:580]     Audit-Id: d3fb0f60-6aa8-4959-bd33-150c4513a34f
	I0612 15:02:34.202944   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.202944   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.202944   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.202944   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.207015   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1796"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87185 chars]
	I0612 15:02:34.214499   13752 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.215122   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:02:34.215122   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.215122   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.215122   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.215823   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.215823   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.215823   13752 round_trippers.go:580]     Audit-Id: 7ce271ef-8ae1-49bb-95c4-a3d4d2abc9ec
	I0612 15:02:34.215823   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.215823   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.215823   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.215823   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.215823   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.219160   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:02:34.219759   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.219827   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.219827   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.219827   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.220082   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.223212   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.223212   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.223212   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.223212   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.223212   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.223212   13752 round_trippers.go:580]     Audit-Id: f722ed58-6971-4345-9b46-8ca5a287bcc9
	I0612 15:02:34.223212   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.223511   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.223933   13752 pod_ready.go:97] node "multinode-025000" hosting pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.224030   13752 pod_ready.go:81] duration metric: took 8.9076ms for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.224030   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
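The pattern that repeats from here on: fetch a control-plane pod, then fetch its hosting node, and if the node's Ready condition is not True, cut the pod wait short (pod_ready.go:97) rather than spend the 4m0s budget. The node-side check, in miniature:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeIsReady mirrors the check behind pod_ready.go:97 — a pod hosted on a
// node whose Ready condition is not True is skipped instead of waited on.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(nodeIsReady(n)) // false, as for multinode-025000 above
}
```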
	I0612 15:02:34.224030   13752 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.224136   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025000
	I0612 15:02:34.224215   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.224247   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.224289   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.225942   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:02:34.225942   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.225942   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.225942   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.225942   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.225942   13752 round_trippers.go:580]     Audit-Id: e5ca8e60-6c70-4073-9a58-fb2e7f16d768
	I0612 15:02:34.227168   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.227168   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.227372   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2","resourceVersion":"1782","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.200.184:2379","kubernetes.io/config.hash":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.mirror":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.seen":"2024-06-12T22:02:25.563300686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0612 15:02:34.227372   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.227372   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.227900   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.227900   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.228729   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.230321   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.230321   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.230321   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.230321   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.230321   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.230321   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.230321   13752 round_trippers.go:580]     Audit-Id: dc20e2f9-b1f0-4b77-826b-a1fda5d20fcb
	I0612 15:02:34.230731   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.231191   13752 pod_ready.go:97] node "multinode-025000" hosting pod "etcd-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.231236   13752 pod_ready.go:81] duration metric: took 7.206ms for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.231268   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "etcd-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.231268   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.231390   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025000
	I0612 15:02:34.231390   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.231430   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.231443   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.231701   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.231701   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.231701   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.231701   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.231701   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.231701   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.231701   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.231701   13752 round_trippers.go:580]     Audit-Id: 0ee8cf9b-f5f7-47c8-bd14-8445fb455245
	I0612 15:02:34.234484   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025000","namespace":"kube-system","uid":"63e55411-d432-4e5a-becc-fae0887fecae","resourceVersion":"1781","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.200.184:8443","kubernetes.io/config.hash":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.mirror":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.seen":"2024-06-12T22:02:25.478872091Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0612 15:02:34.235276   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.235276   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.235319   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.235319   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.237588   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.237588   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.237588   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.237588   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.237588   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.237588   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.237588   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.237588   13752 round_trippers.go:580]     Audit-Id: 4a969dbb-56ac-46e9-b9be-37aaf45bc432
	I0612 15:02:34.237780   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.238155   13752 pod_ready.go:97] node "multinode-025000" hosting pod "kube-apiserver-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.238239   13752 pod_ready.go:81] duration metric: took 6.9713ms for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.238239   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "kube-apiserver-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.238239   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.238357   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025000
	I0612 15:02:34.238399   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.238399   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.238399   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.238629   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.238629   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.238629   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.238629   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.238629   13752 round_trippers.go:580]     Audit-Id: a4666425-b1dd-4434-9cee-0f790a031a60
	I0612 15:02:34.238629   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.238629   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.241384   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.241750   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025000","namespace":"kube-system","uid":"68c9aa4f-49ee-439c-ad51-7943e65c0085","resourceVersion":"1776","creationTimestamp":"2024-06-12T21:39:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.mirror":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.seen":"2024-06-12T21:39:23.999674614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0612 15:02:34.242359   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.242359   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.242359   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.242359   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.242587   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.245123   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.245123   13752 round_trippers.go:580]     Audit-Id: c17de349-28c4-4be2-b4f1-2b65d98679e3
	I0612 15:02:34.245123   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.245123   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.245123   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.245123   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.245123   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.245257   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.246045   13752 pod_ready.go:97] node "multinode-025000" hosting pod "kube-controller-manager-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.246045   13752 pod_ready.go:81] duration metric: took 7.7469ms for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.246045   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "kube-controller-manager-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.246045   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.393279   13752 request.go:629] Waited for 147.0069ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:02:34.393466   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:02:34.393567   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.393585   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.393585   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.394318   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.394318   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.394318   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.394318   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.394318   13752 round_trippers.go:580]     Audit-Id: f5c59989-bbd7-4295-8f5f-9718f26a43b5
	I0612 15:02:34.397732   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.397732   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.397732   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.398029   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47lr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"10b24fa7-8eea-4fbb-ab18-404e853aa7ab","resourceVersion":"1793","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0612 15:02:34.590194   13752 request.go:629] Waited for 190.796ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.590419   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.590419   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.590419   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.590419   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.590718   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.594373   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.594373   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.594373   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.594373   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.594373   13752 round_trippers.go:580]     Audit-Id: b075564e-90f8-4821-a226-15e162bee9aa
	I0612 15:02:34.594373   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.594373   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.594699   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.595292   13752 pod_ready.go:97] node "multinode-025000" hosting pod "kube-proxy-47lr8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.595365   13752 pod_ready.go:81] duration metric: took 349.3183ms for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.595365   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "kube-proxy-47lr8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.595365   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.794840   13752 request.go:629] Waited for 199.1151ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:02:34.794967   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:02:34.794967   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.794967   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.794967   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.795388   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.795388   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.795388   13752 round_trippers.go:580]     Audit-Id: 1b0700c1-bfd2-44e2-a10f-884f0026a486
	I0612 15:02:34.795388   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.795388   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.795388   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.795388   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.795388   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.798793   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7jwdg","generateName":"kube-proxy-","namespace":"kube-system","uid":"643030f7-b876-4243-bacc-04205e88cc9e","resourceVersion":"1748","creationTimestamp":"2024-06-12T21:47:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:47:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0612 15:02:34.999437   13752 request.go:629] Waited for 199.9684ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:02:34.999664   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:02:34.999960   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.999960   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.999960   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.000226   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.000226   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.000226   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.000226   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.000226   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.000226   13752 round_trippers.go:580]     Audit-Id: 1da4a651-be9d-4d48-b392-d674e54a35f9
	I0612 15:02:35.000226   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.000226   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.004160   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m03","uid":"9d457bc2-c46f-4b5d-8023-5c06ef6198c7","resourceVersion":"1760","creationTimestamp":"2024-06-12T21:57:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_57_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:57:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0612 15:02:35.004640   13752 pod_ready.go:97] node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
	I0612 15:02:35.004640   13752 pod_ready.go:81] duration metric: took 409.2736ms for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:35.004640   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
	I0612 15:02:35.004640   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:35.191442   13752 request.go:629] Waited for 186.4651ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:02:35.191737   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:02:35.191853   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:35.191853   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:35.191853   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.192136   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.192136   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.192136   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.192136   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.192136   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.192136   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.192136   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.192136   13752 round_trippers.go:580]     Audit-Id: a48ecf18-e3a7-4b57-9825-4c50d5c19ced
	I0612 15:02:35.195522   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdcdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623833c-ce55-46b1-a840-99b3143adac1","resourceVersion":"637","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0612 15:02:35.403151   13752 request.go:629] Waited for 206.8154ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:02:35.403425   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:02:35.403425   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:35.403425   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:35.403425   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.403900   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.407929   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.407929   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.408032   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.408032   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.408032   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.408032   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.408032   13752 round_trippers.go:580]     Audit-Id: a938f75c-91c1-42de-a957-e63452e95bac
	I0612 15:02:35.408215   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"1705","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0612 15:02:35.408788   13752 pod_ready.go:92] pod "kube-proxy-tdcdp" in "kube-system" namespace has status "Ready":"True"
	I0612 15:02:35.408788   13752 pod_ready.go:81] duration metric: took 404.1473ms for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:35.408876   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:35.595108   13752 request.go:629] Waited for 185.8045ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:02:35.595108   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:02:35.595108   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:35.595108   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:35.595108   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.595640   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.595640   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.595640   13752 round_trippers.go:580]     Audit-Id: a816a944-d0b0-4787-bde1-73300e306955
	I0612 15:02:35.595640   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.595640   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.595640   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.595640   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.595640   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.599079   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025000","namespace":"kube-system","uid":"83b272cb-1286-47d8-bcb1-a66056dff2a5","resourceVersion":"1778","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.mirror":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.seen":"2024-06-12T21:39:31.214466565Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0612 15:02:35.792960   13752 request.go:629] Waited for 193.1829ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:35.793029   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:35.793193   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:35.793193   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.793193   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:35.794082   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.794082   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.794082   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.794082   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.794082   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.794082   13752 round_trippers.go:580]     Audit-Id: 4b57cf87-db8c-4127-95ec-77335c84f0cb
	I0612 15:02:35.794082   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.794082   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.798282   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:35.798899   13752 pod_ready.go:97] node "multinode-025000" hosting pod "kube-scheduler-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:35.798899   13752 pod_ready.go:81] duration metric: took 390.0212ms for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:35.798899   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "kube-scheduler-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:35.798899   13752 pod_ready.go:38] duration metric: took 1.6017111s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:02:35.798899   13752 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 15:02:35.818162   13752 command_runner.go:130] > -16
	I0612 15:02:35.818243   13752 ops.go:34] apiserver oom_adj: -16
	I0612 15:02:35.818243   13752 kubeadm.go:591] duration metric: took 12.9168217s to restartPrimaryControlPlane
	I0612 15:02:35.818243   13752 kubeadm.go:393] duration metric: took 12.9773558s to StartCluster
	I0612 15:02:35.818243   13752 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:35.818470   13752 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:02:35.819880   13752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:35.821386   13752 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.200.184 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 15:02:35.825251   13752 out.go:177] * Verifying Kubernetes components...
	I0612 15:02:35.821386   13752 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 15:02:35.821721   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:02:35.829956   13752 out.go:177] * Enabled addons: 
	I0612 15:02:35.832597   13752 addons.go:510] duration metric: took 11.2115ms for enable addons: enabled=[]
	I0612 15:02:35.838299   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:36.101038   13752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 15:02:36.128543   13752 node_ready.go:35] waiting up to 6m0s for node "multinode-025000" to be "Ready" ...
	I0612 15:02:36.128797   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:36.128797   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:36.128895   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:36.128895   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:36.132997   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:02:36.132997   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:36.132997   13752 round_trippers.go:580]     Audit-Id: 89fffe12-9015-4f4d-97e3-025b69d22ee9
	I0612 15:02:36.132997   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:36.132997   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:36.132997   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:36.132997   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:36.132997   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:36 GMT
	I0612 15:02:36.133145   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:36.635678   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:36.635678   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:36.635678   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:36.635678   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:36.636239   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:36.640856   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:36.640856   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:36 GMT
	I0612 15:02:36.640856   13752 round_trippers.go:580]     Audit-Id: 3952db6d-bbab-4bbf-9c03-750625fb84bf
	I0612 15:02:36.640856   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:36.640856   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:36.640856   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:36.640856   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:36.640856   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:37.132613   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:37.132655   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:37.132655   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:37.132692   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:37.136199   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:02:37.136261   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:37.136334   13752 round_trippers.go:580]     Audit-Id: ff630076-938d-4400-85c7-004cc7173a13
	I0612 15:02:37.136334   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:37.136370   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:37.136370   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:37.136370   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:37.136370   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:37 GMT
	I0612 15:02:37.136501   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:37.647259   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:37.647259   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:37.647259   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:37.647259   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:37.647864   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:37.647864   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:37.647864   13752 round_trippers.go:580]     Audit-Id: 812419be-64fe-49f6-83cb-4a9a56ae3352
	I0612 15:02:37.647864   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:37.647864   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:37.647864   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:37.647864   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:37.647864   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:37 GMT
	I0612 15:02:37.650905   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:38.141961   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:38.141961   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:38.141961   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:38.141961   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:38.142456   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:38.142456   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:38.142456   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:38.142456   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:38.142456   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:38 GMT
	I0612 15:02:38.142456   13752 round_trippers.go:580]     Audit-Id: 0e354682-3445-4c89-b736-dc53748461b6
	I0612 15:02:38.142456   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:38.142456   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:38.147187   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:38.147187   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:38.630171   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:38.630171   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:38.630171   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:38.630171   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:38.630940   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:38.630940   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:38.630940   13752 round_trippers.go:580]     Audit-Id: 4a11168f-1812-40ed-b5e0-5f14a097ecec
	I0612 15:02:38.630940   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:38.630940   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:38.630940   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:38.630940   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:38.630940   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:38 GMT
	I0612 15:02:38.635455   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:39.135194   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:39.135194   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:39.135194   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:39.135194   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:39.135739   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:39.135739   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:39.135739   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:39.135739   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:39 GMT
	I0612 15:02:39.135739   13752 round_trippers.go:580]     Audit-Id: 6448042b-250c-416e-85bc-e747e2aa29c3
	I0612 15:02:39.135739   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:39.135739   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:39.135739   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:39.140553   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:39.639066   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:39.639066   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:39.639066   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:39.639066   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:39.649390   13752 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 15:02:39.649516   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:39.649516   13752 round_trippers.go:580]     Audit-Id: 4dcf51cd-5311-4e53-a0f3-00c0524677b0
	I0612 15:02:39.649516   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:39.649516   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:39.649516   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:39.649516   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:39.649516   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:39 GMT
	I0612 15:02:39.649675   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:40.133038   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:40.133038   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:40.133038   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:40.133038   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:40.133324   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:40.133324   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:40.133324   13752 round_trippers.go:580]     Audit-Id: fbf68be7-b29a-4d0c-aa4d-21ce19c4f793
	I0612 15:02:40.133324   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:40.133324   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:40.133324   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:40.133324   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:40.133324   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:40 GMT
	I0612 15:02:40.137932   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:40.642693   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:40.642693   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:40.642693   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:40.642693   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:40.643181   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:40.643181   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:40.643181   13752 round_trippers.go:580]     Audit-Id: 68b76927-7da0-4ebf-9574-2624b0275910
	I0612 15:02:40.643181   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:40.643181   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:40.643181   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:40.643181   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:40.643181   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:40 GMT
	I0612 15:02:40.647273   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:40.647615   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:41.137695   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:41.137695   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:41.137695   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:41.137695   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:41.147822   13752 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 15:02:41.147822   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:41.147822   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:41 GMT
	I0612 15:02:41.147822   13752 round_trippers.go:580]     Audit-Id: 3a216226-4bf9-46da-a15d-6976309e7b9b
	I0612 15:02:41.147822   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:41.147822   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:41.147822   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:41.147822   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:41.148598   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:41.633907   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:41.633907   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:41.633907   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:41.633907   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:41.634425   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:41.634425   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:41.634425   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:41.634425   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:41.634425   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:41 GMT
	I0612 15:02:41.641707   13752 round_trippers.go:580]     Audit-Id: 0239a8b4-6a11-4367-9646-7da0224b27ac
	I0612 15:02:41.641707   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:41.641707   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:41.641957   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:42.130844   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:42.131026   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:42.131026   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:42.131026   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:42.131306   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:42.131306   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:42.131306   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:42.131306   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:42.131306   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:42.131306   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:42.131306   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:42 GMT
	I0612 15:02:42.131306   13752 round_trippers.go:580]     Audit-Id: 83e98fcb-fd7b-4a20-9518-283c77f823a0
	I0612 15:02:42.135630   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:42.639185   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:42.639481   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:42.639481   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:42.639481   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:42.639726   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:42.643357   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:42.643357   13752 round_trippers.go:580]     Audit-Id: 1e2f2d66-114f-4a95-9daf-170229786432
	I0612 15:02:42.643357   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:42.643357   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:42.643357   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:42.643357   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:42.643443   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:42 GMT
	I0612 15:02:42.643996   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:43.131108   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:43.131177   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:43.131177   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:43.131177   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:43.135474   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:43.135574   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:43.135574   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:43.135574   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:43 GMT
	I0612 15:02:43.135706   13752 round_trippers.go:580]     Audit-Id: 09ab55b2-3ca9-4e46-be41-8685a43593d9
	I0612 15:02:43.135706   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:43.135706   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:43.135706   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:43.135853   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:43.135853   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
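	(The repeated polls above are a node readiness wait: the client re-fetches the Node object roughly every 500 ms and logs the "Ready" condition until it reports "True". The sketch below is a minimal illustration of that pattern with client-go, assuming a default kubeconfig and a made-up 10-minute timeout; it is not minikube's actual node_ready.go implementation.)

```go
// Illustrative sketch only (not minikube's code): poll a Node until its
// NodeReady condition becomes "True", mirroring the loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: credentials come from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	const nodeName = "multinode-025000"
	deadline := time.Now().Add(10 * time.Minute) // hypothetical timeout
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					// Same shape as the node_ready.go:53 lines in this log.
					fmt.Printf("node %q has status \"Ready\":%q\n", nodeName, cond.Status)
					if cond.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // ~500 ms cadence, as in the timestamps above
	}
	fmt.Println("timed out waiting for node readiness")
}
```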
	I0612 15:02:43.634264   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:43.634264   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:43.634264   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:43.634264   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:43.640359   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:02:43.640359   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:43.640359   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:43.640359   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:43.640359   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:43 GMT
	I0612 15:02:43.640359   13752 round_trippers.go:580]     Audit-Id: 970fec84-f03e-43dd-8131-863be9b1c3f0
	I0612 15:02:43.640359   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:43.640359   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:43.640830   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:44.129509   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:44.129543   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:44.129592   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:44.129592   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:44.137387   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:02:44.137387   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:44.137387   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:44.137387   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:44.137387   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:44.137387   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:44 GMT
	I0612 15:02:44.137387   13752 round_trippers.go:580]     Audit-Id: 208e9f5b-f5d7-484d-a5a5-de055d63ac5e
	I0612 15:02:44.137387   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:44.143304   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:44.641420   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:44.641420   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:44.641420   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:44.641420   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:44.641965   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:44.641965   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:44.641965   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:44 GMT
	I0612 15:02:44.641965   13752 round_trippers.go:580]     Audit-Id: 0807ccb1-8c7b-4fad-8d8e-a11488a690f5
	I0612 15:02:44.641965   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:44.645681   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:44.645681   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:44.645681   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:44.645942   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:45.138092   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:45.138092   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:45.138092   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:45.138092   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:45.138660   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:45.142763   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:45.142763   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:45.142763   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:45.142763   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:45.142860   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:45.142860   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:45 GMT
	I0612 15:02:45.142860   13752 round_trippers.go:580]     Audit-Id: 74090bdc-81fe-4d10-beab-310299bdab1c
	I0612 15:02:45.142929   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:45.143515   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:45.645463   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:45.645463   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:45.645463   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:45.645463   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:45.646925   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:02:45.646925   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:45.646925   13752 round_trippers.go:580]     Audit-Id: 2c03576a-b851-430e-9a2d-a3a9be3e6a8f
	I0612 15:02:45.646925   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:45.646925   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:45.648810   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:45.648810   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:45.648810   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:45 GMT
	I0612 15:02:45.649206   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:46.142845   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:46.142845   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:46.142909   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:46.142909   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:46.148125   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:02:46.148549   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:46.148549   13752 round_trippers.go:580]     Audit-Id: 5a03aa75-28ea-491d-946f-5dabe045d8a0
	I0612 15:02:46.148549   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:46.148549   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:46.148606   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:46.148606   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:46.148606   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:46 GMT
	I0612 15:02:46.148825   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:46.632705   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:46.632705   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:46.632705   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:46.632705   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:46.635221   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:02:46.635221   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:46.636373   13752 round_trippers.go:580]     Audit-Id: f2f569ba-307c-4e79-af70-e472246d5a9d
	I0612 15:02:46.636373   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:46.636373   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:46.636373   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:46.636373   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:46.636373   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:46 GMT
	I0612 15:02:46.636373   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:47.143878   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:47.143943   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:47.143943   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:47.143943   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:47.144278   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:47.148129   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:47.148129   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:47 GMT
	I0612 15:02:47.148129   13752 round_trippers.go:580]     Audit-Id: 0492a9d8-b02e-4f1f-9105-9c1179c328b1
	I0612 15:02:47.148129   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:47.148129   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:47.148129   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:47.148129   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:47.148129   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:47.148876   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
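	(The round_trippers.go lines in this log come from a debug wrapper around the HTTP transport that prints the request line, the request headers, the response status with its latency, and the response headers for every API call. Below is a hypothetical stand-alone version of that pattern using only net/http; it is a sketch of the technique, not client-go's actual round_trippers.go.)

```go
// Hypothetical sketch of a logging RoundTripper, the pattern behind the
// round_trippers.go request/response lines in this log.
package main

import (
	"log"
	"net/http"
	"time"
)

type debugTransport struct{ next http.RoundTripper }

func (d debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Log the request line and headers before delegating to the real transport.
	log.Printf("%s %s", req.Method, req.URL)
	log.Println("Request Headers:")
	for name, values := range req.Header {
		for _, v := range values {
			log.Printf("    %s: %s", name, v)
		}
	}
	start := time.Now()
	resp, err := d.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	// Log the status, elapsed time, and response headers, as above.
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Println("Response Headers:")
	for name, values := range resp.Header {
		for _, v := range values {
			log.Printf("    %s: %s", name, v)
		}
	}
	return resp, nil
}

func main() {
	// Wrapping the transport leaves calling code unchanged, which is why these
	// lines appear for every API request in the log. Placeholder URL below.
	client := &http.Client{Transport: debugTransport{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
```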
	I0612 15:02:47.648494   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:47.648494   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:47.648494   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:47.648494   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:47.649021   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:47.652001   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:47.652001   13752 round_trippers.go:580]     Audit-Id: fcf74f3f-743d-4726-9229-ca7c555f6e86
	I0612 15:02:47.652001   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:47.652001   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:47.652001   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:47.652001   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:47.652001   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:47 GMT
	I0612 15:02:47.652001   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:48.139572   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:48.139572   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:48.139572   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:48.139572   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:48.143679   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:02:48.143679   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:48.143679   13752 round_trippers.go:580]     Audit-Id: a43b60e4-5610-413d-acee-c6af0d4c21a4
	I0612 15:02:48.143679   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:48.143679   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:48.143679   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:48.143679   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:48.143679   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:48 GMT
	I0612 15:02:48.143679   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:48.638819   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:48.638819   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:48.638819   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:48.638819   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:48.642814   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:48.642852   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:48.642852   13752 round_trippers.go:580]     Audit-Id: e42aec87-e0a4-4f26-966d-07a90a72a008
	I0612 15:02:48.642852   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:48.642852   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:48.642852   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:48.642852   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:48.642852   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:48 GMT
	I0612 15:02:48.642852   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:49.134581   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:49.134879   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:49.134879   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:49.134879   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:49.135233   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:49.139052   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:49.139052   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:49.139052   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:49.139052   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:49.139052   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:49 GMT
	I0612 15:02:49.139052   13752 round_trippers.go:580]     Audit-Id: ec8c20ff-2ea1-4b7c-9baf-af3664a76318
	I0612 15:02:49.139052   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:49.139052   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:49.650214   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:49.650302   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:49.650302   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:49.650302   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:49.651082   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:49.654312   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:49.654312   13752 round_trippers.go:580]     Audit-Id: ce842bbc-fc75-4e5d-bd62-ce1df2837521
	I0612 15:02:49.654312   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:49.654312   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:49.654312   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:49.654312   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:49.654312   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:49 GMT
	I0612 15:02:49.654312   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:49.655281   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:50.139543   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:50.139543   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:50.139543   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:50.139543   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:50.140279   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:50.140279   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:50.140279   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:50.140279   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:50.140279   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:50 GMT
	I0612 15:02:50.140279   13752 round_trippers.go:580]     Audit-Id: f5c4537b-9bfa-470c-939a-1d80375bb472
	I0612 15:02:50.140279   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:50.140279   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:50.144268   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:50.632735   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:50.632735   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:50.632835   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:50.632835   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:50.633097   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:50.633097   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:50.633097   13752 round_trippers.go:580]     Audit-Id: b17a24aa-062c-4d56-ab19-217ea2c97d68
	I0612 15:02:50.633097   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:50.637060   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:50.637060   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:50.637060   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:50.637060   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:50 GMT
	I0612 15:02:50.637404   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:51.139845   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:51.140137   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:51.140137   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:51.140137   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:51.140525   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:51.144482   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:51.144482   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:51.144482   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:51.144550   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:51 GMT
	I0612 15:02:51.144550   13752 round_trippers.go:580]     Audit-Id: 66121eeb-914c-49b0-989a-cb0ab3eea56d
	I0612 15:02:51.144550   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:51.144550   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:51.144707   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:51.641552   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:51.641552   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:51.641552   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:51.641552   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:51.647911   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:02:51.647911   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:51.647911   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:51.647911   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:51.647911   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:51.647911   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:51 GMT
	I0612 15:02:51.647911   13752 round_trippers.go:580]     Audit-Id: 4ea5d0e8-aa79-48ab-836c-8f7901a76124
	I0612 15:02:51.647911   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:51.649338   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:52.140387   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:52.140387   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:52.140387   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:52.140387   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:52.140969   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:52.140969   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:52.144528   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:52.144528   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:52.144528   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:52 GMT
	I0612 15:02:52.144528   13752 round_trippers.go:580]     Audit-Id: 6bda08b4-29f0-44e2-bd05-400d886a7037
	I0612 15:02:52.144528   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:52.144528   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:52.144756   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:52.145509   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:52.638166   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:52.638166   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:52.638166   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:52.638166   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:52.642174   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:52.642216   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:52.642216   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:52.642216   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:52.642216   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:52 GMT
	I0612 15:02:52.642216   13752 round_trippers.go:580]     Audit-Id: 507d1e03-9e52-4dd7-b69d-8b4f7405f2ad
	I0612 15:02:52.642216   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:52.642216   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:52.642216   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:53.131499   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:53.131646   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:53.131646   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:53.131646   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:53.132789   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:02:53.132789   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:53.132789   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:53 GMT
	I0612 15:02:53.132789   13752 round_trippers.go:580]     Audit-Id: 34737c82-f4ec-4d79-a4c7-6ea39c4ac9d0
	I0612 15:02:53.132789   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:53.136551   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:53.136551   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:53.136551   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:53.136661   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:53.630323   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:53.630323   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:53.630323   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:53.630323   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:53.631042   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:53.631042   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:53.631042   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:53.631042   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:53 GMT
	I0612 15:02:53.638687   13752 round_trippers.go:580]     Audit-Id: 2e0e1d38-01a3-477b-809b-a0188a92a062
	I0612 15:02:53.638687   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:53.638687   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:53.638687   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:53.638840   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:54.141746   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:54.141746   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:54.141746   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:54.142038   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:54.142315   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:54.142315   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:54.142315   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:54.142315   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:54.142315   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:54.142315   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:54.142315   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:54 GMT
	I0612 15:02:54.142315   13752 round_trippers.go:580]     Audit-Id: de69b983-c705-4010-ad85-301ab4e0aaea
	I0612 15:02:54.148042   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:54.148495   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:54.639066   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:54.639066   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:54.639066   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:54.639066   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:54.639632   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:54.639632   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:54.639632   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:54.639632   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:54.639632   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:54.639632   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:54.642270   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:54 GMT
	I0612 15:02:54.642270   13752 round_trippers.go:580]     Audit-Id: b9d6ca66-7d48-4f48-bcd1-b8ecfd9b7d86
	I0612 15:02:54.642501   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:55.145356   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:55.145356   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:55.145356   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:55.145356   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:55.146606   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:02:55.146606   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:55.146606   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:55 GMT
	I0612 15:02:55.146606   13752 round_trippers.go:580]     Audit-Id: e63b3525-57ba-4188-9b45-53c338b92e78
	I0612 15:02:55.149891   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:55.149891   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:55.149891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:55.149891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:55.150208   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:55.641073   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:55.641148   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:55.641148   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:55.641173   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:55.647972   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:02:55.647972   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:55.647972   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:55.647972   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:55 GMT
	I0612 15:02:55.647972   13752 round_trippers.go:580]     Audit-Id: cad13f2a-c5cb-480b-bae3-8323c4b4714c
	I0612 15:02:55.647972   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:55.647972   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:55.647972   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:55.649384   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:56.139188   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:56.139414   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:56.139414   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:56.139414   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:56.144250   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:02:56.144292   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:56.144376   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:56.144376   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:56 GMT
	I0612 15:02:56.144376   13752 round_trippers.go:580]     Audit-Id: 8a136706-01ca-40f4-ab91-162f3f44cfe1
	I0612 15:02:56.144411   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:56.144411   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:56.144411   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:56.144652   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:56.631947   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:56.631947   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:56.632022   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:56.632022   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:56.632819   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:56.632819   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:56.632819   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:56.632819   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:56.636461   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:56 GMT
	I0612 15:02:56.636461   13752 round_trippers.go:580]     Audit-Id: 91d64590-5e37-4085-8fbb-4d81bbef2ef6
	I0612 15:02:56.636461   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:56.636461   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:56.636759   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:56.637292   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:57.142358   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:57.142595   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:57.142595   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:57.142595   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:57.143466   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:57.143466   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:57.143466   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:57 GMT
	I0612 15:02:57.148279   13752 round_trippers.go:580]     Audit-Id: 20fe3bb4-3608-43a8-817d-c6e2be21ad07
	I0612 15:02:57.148279   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:57.148279   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:57.148279   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:57.148403   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:57.148599   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:57.646718   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:57.646872   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:57.646872   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:57.646959   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:57.650358   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:02:57.650358   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:57.650358   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:57.650358   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:57.650358   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:57.650358   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:57 GMT
	I0612 15:02:57.650358   13752 round_trippers.go:580]     Audit-Id: 2c9b8312-977d-477e-8761-04fe49ea7782
	I0612 15:02:57.650358   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:57.650358   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:58.142949   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:58.142949   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:58.142949   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:58.142949   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:58.143572   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:58.147512   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:58.147512   13752 round_trippers.go:580]     Audit-Id: 686289a3-1b37-4337-b1df-4232076139e7
	I0612 15:02:58.147512   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:58.147646   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:58.147774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:58.147774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:58.147774   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:58 GMT
	I0612 15:02:58.147862   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:58.644345   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:58.644345   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:58.644457   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:58.644457   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:58.644829   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:58.644829   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:58.644829   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:58 GMT
	I0612 15:02:58.644829   13752 round_trippers.go:580]     Audit-Id: 195dd45e-17fd-458b-8a15-08496e7ab7d7
	I0612 15:02:58.644829   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:58.644829   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:58.644829   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:58.648338   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:58.648483   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:58.649096   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:59.135237   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:59.135237   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:59.135237   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:59.135237   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:59.135673   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:59.139347   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:59.139434   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:59 GMT
	I0612 15:02:59.139434   13752 round_trippers.go:580]     Audit-Id: 3021bd87-cb23-4982-981e-1880cc6e7256
	I0612 15:02:59.139434   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:59.139434   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:59.139434   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:59.139434   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:59.139434   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:59.644639   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:59.644639   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:59.644639   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:59.644639   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:59.648010   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:02:59.648244   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:59.648244   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:59 GMT
	I0612 15:02:59.648244   13752 round_trippers.go:580]     Audit-Id: 976a5857-7f72-463a-833a-78a5cc6ae3d8
	I0612 15:02:59.648381   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:59.648381   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:59.648381   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:59.648381   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:59.648756   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:00.135449   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:00.135449   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:00.135449   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:00.135449   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:00.136003   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:00.140457   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:00.140457   13752 round_trippers.go:580]     Audit-Id: 11f3cb50-dd28-407e-a21e-cb93ae42961f
	I0612 15:03:00.140457   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:00.140457   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:00.140457   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:00.140457   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:00.140457   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:00 GMT
	I0612 15:03:00.140457   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:00.634722   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:00.634722   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:00.634722   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:00.634722   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:00.635365   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:00.635365   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:00.635365   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:00 GMT
	I0612 15:03:00.635365   13752 round_trippers.go:580]     Audit-Id: 0b91b955-f405-42b7-b790-f34b045553ec
	I0612 15:03:00.635365   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:00.635365   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:00.639493   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:00.639493   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:00.639909   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:01.143191   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:01.143417   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:01.143417   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:01.143417   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:01.143697   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:01.147527   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:01.147527   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:01 GMT
	I0612 15:03:01.147527   13752 round_trippers.go:580]     Audit-Id: d5040ae2-0164-4654-b24c-1ff69481062b
	I0612 15:03:01.147527   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:01.147527   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:01.147527   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:01.147611   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:01.147679   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:01.148368   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:01.641872   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:01.642115   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:01.642115   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:01.642115   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:01.642440   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:01.642440   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:01.642440   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:01.642440   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:01 GMT
	I0612 15:03:01.642440   13752 round_trippers.go:580]     Audit-Id: ba3754fc-afe9-49a0-a96c-bbf267bf2a10
	I0612 15:03:01.642440   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:01.642440   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:01.642440   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:01.645720   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:02.129515   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:02.129515   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:02.129614   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:02.129614   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:02.130242   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:02.133528   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:02.133528   13752 round_trippers.go:580]     Audit-Id: 2d2e6495-76f7-4207-96a9-aa13cc893089
	I0612 15:03:02.133528   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:02.133528   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:02.133528   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:02.133528   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:02.133528   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:02 GMT
	I0612 15:03:02.133782   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:02.633527   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:02.633527   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:02.633778   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:02.633778   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:02.637700   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:02.637700   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:02.637700   13752 round_trippers.go:580]     Audit-Id: 45f84608-c6da-4334-ac7e-ddcc400b8087
	I0612 15:03:02.637700   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:02.637700   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:02.637700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:02.637700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:02.637700   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:02 GMT
	I0612 15:03:02.637995   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:03.132689   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:03.132689   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:03.132689   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:03.132689   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:03.133669   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:03.133669   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:03.136989   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:03 GMT
	I0612 15:03:03.136989   13752 round_trippers.go:580]     Audit-Id: 772a8546-5d13-47eb-bf6d-ed3ebd156e02
	I0612 15:03:03.136989   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:03.136989   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:03.136989   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:03.136989   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:03.137424   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:03.647711   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:03.647711   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:03.647711   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:03.647903   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:03.648504   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:03.648504   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:03.651570   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:03.651570   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:03.651570   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:03.651570   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:03.651570   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:03 GMT
	I0612 15:03:03.651570   13752 round_trippers.go:580]     Audit-Id: 4d2eb9e7-e070-451f-bc89-bb0ae6450467
	I0612 15:03:03.651678   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:03.652127   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:04.141516   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:04.141647   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:04.141647   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:04.141647   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:04.148671   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:04.148671   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:04.148756   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:04.148756   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:04.148756   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:04.148756   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:04.148791   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:04 GMT
	I0612 15:03:04.148791   13752 round_trippers.go:580]     Audit-Id: e9e220eb-dbba-4ae7-b7cf-c873aeb24231
	I0612 15:03:04.148923   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:04.630964   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:04.631108   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:04.631108   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:04.631108   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:04.631390   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:04.631390   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:04.631390   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:04.635298   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:04.635298   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:04.635298   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:04 GMT
	I0612 15:03:04.635298   13752 round_trippers.go:580]     Audit-Id: c876ddc5-a24b-429a-92e2-a4ad20a7d83b
	I0612 15:03:04.635298   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:04.635678   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:05.140706   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:05.141249   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:05.141249   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:05.141249   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:05.141891   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:05.141891   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:05.141891   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:05.141891   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:05.141891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:05.141891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:05.141891   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:05 GMT
	I0612 15:03:05.141891   13752 round_trippers.go:580]     Audit-Id: e9b74051-8e77-438e-887e-2c05705a3f63
	I0612 15:03:05.146329   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:05.645551   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:05.645810   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:05.645810   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:05.645810   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:05.654583   13752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 15:03:05.654583   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:05.654583   13752 round_trippers.go:580]     Audit-Id: f7197bd1-0f77-486d-b5cc-dbeced0b88be
	I0612 15:03:05.654583   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:05.654583   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:05.654583   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:05.654583   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:05.654583   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:05 GMT
	I0612 15:03:05.655139   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:05.655384   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:06.135845   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:06.136081   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:06.136081   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:06.136081   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:06.136855   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:06.136855   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:06.140440   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:06.140440   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:06 GMT
	I0612 15:03:06.140440   13752 round_trippers.go:580]     Audit-Id: 1de4c03e-90fa-4dae-904c-fee95d24c0bf
	I0612 15:03:06.140440   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:06.140440   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:06.140440   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:06.140440   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:06.635670   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:06.635754   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:06.635754   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:06.635754   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:06.636503   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:06.636503   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:06.642930   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:06.642930   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:06.642930   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:06 GMT
	I0612 15:03:06.642930   13752 round_trippers.go:580]     Audit-Id: d7f8487c-4817-423b-8eae-ba1900959d38
	I0612 15:03:06.642930   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:06.642930   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:06.643248   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:07.137764   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:07.138048   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:07.138048   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:07.138048   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:07.138386   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:07.138386   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:07.138386   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:07 GMT
	I0612 15:03:07.138386   13752 round_trippers.go:580]     Audit-Id: 6c36857e-9227-4604-86f4-8655dfa27dda
	I0612 15:03:07.138386   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:07.138386   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:07.138386   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:07.138386   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:07.142146   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:07.646735   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:07.646841   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:07.646841   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:07.646841   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:07.647254   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:07.647254   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:07.647254   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:07.647254   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:07.647254   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:07 GMT
	I0612 15:03:07.647254   13752 round_trippers.go:580]     Audit-Id: 94e6a594-4dc6-4dfd-8190-34a2dea4e2e2
	I0612 15:03:07.647254   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:07.647254   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:07.651514   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:08.130727   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:08.130727   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:08.130727   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:08.130727   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:08.131272   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:08.131272   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:08.131272   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:08.131272   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:08.134600   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:08 GMT
	I0612 15:03:08.134600   13752 round_trippers.go:580]     Audit-Id: 12a6382e-fd3f-4872-b20a-513d3ad54caf
	I0612 15:03:08.134600   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:08.134600   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:08.134660   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:08.135545   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:08.633307   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:08.633399   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:08.633399   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:08.633399   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:08.633711   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:08.637571   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:08.637571   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:08.637571   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:08.637571   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:08.637571   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:08.637571   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:08 GMT
	I0612 15:03:08.637571   13752 round_trippers.go:580]     Audit-Id: 12b95389-425c-4953-b8ba-d0bfdb2dc80e
	I0612 15:03:08.637929   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:09.137989   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:09.138245   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:09.138245   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:09.138245   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:09.138616   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:09.138616   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:09.138616   13752 round_trippers.go:580]     Audit-Id: 49773ac3-55d3-4a9e-9386-c93c496422c3
	I0612 15:03:09.138616   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:09.138616   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:09.138616   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:09.138616   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:09.138616   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:09 GMT
	I0612 15:03:09.143658   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:09.641031   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:09.641031   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:09.641217   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:09.641217   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:09.641605   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:09.641605   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:09.641605   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:09.641605   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:09 GMT
	I0612 15:03:09.641605   13752 round_trippers.go:580]     Audit-Id: ae098f0a-dca6-44e1-ae2b-ba2a8ba8b6d8
	I0612 15:03:09.641605   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:09.641605   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:09.641605   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:09.645198   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:10.130121   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:10.130328   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:10.130328   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:10.130328   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:10.137031   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:10.137031   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:10.137031   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:10.137031   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:10.137031   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:10 GMT
	I0612 15:03:10.137031   13752 round_trippers.go:580]     Audit-Id: e5623e7f-8f39-4ea7-9b2f-9c677bb51e0a
	I0612 15:03:10.137031   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:10.137031   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:10.137671   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:10.138356   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:10.635994   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:10.636065   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:10.636065   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:10.636134   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:10.636429   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:10.639993   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:10.639993   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:10.639993   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:10.640071   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:10.640071   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:10 GMT
	I0612 15:03:10.640071   13752 round_trippers.go:580]     Audit-Id: b3bddf79-9ee0-493b-9360-98d8b1173aca
	I0612 15:03:10.640071   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:10.640109   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:11.141820   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:11.141820   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:11.141820   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:11.142167   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:11.142800   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:11.147487   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:11.147487   13752 round_trippers.go:580]     Audit-Id: c97af60e-dc32-48da-90ff-43d0f4196364
	I0612 15:03:11.147487   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:11.147487   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:11.147487   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:11.147487   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:11.147546   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:11 GMT
	I0612 15:03:11.147648   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:11.645558   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:11.645754   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:11.645754   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:11.645754   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:11.651733   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:11.651733   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:11.651733   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:11.651786   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:11.651786   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:11.651786   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:11.651809   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:11 GMT
	I0612 15:03:11.651809   13752 round_trippers.go:580]     Audit-Id: d4fd169c-fd3c-4974-a7b4-a94bcf0f43f8
	I0612 15:03:11.651838   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:12.139831   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:12.139905   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.139905   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.139905   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.143452   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:12.143452   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.143452   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.143452   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.143452   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.143452   13752 round_trippers.go:580]     Audit-Id: 5608a3eb-4686-445b-b409-8c5557525254
	I0612 15:03:12.143452   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.143452   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.143452   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:12.144281   13752 node_ready.go:49] node "multinode-025000" has status "Ready":"True"
	I0612 15:03:12.144343   13752 node_ready.go:38] duration metric: took 36.0155064s for node "multinode-025000" to be "Ready" ...
	I0612 15:03:12.144343   13752 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:03:12.144469   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:03:12.144469   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.144537   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.144537   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.149807   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:12.149807   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.149807   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.149807   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.149807   13752 round_trippers.go:580]     Audit-Id: 64d33b6c-23e7-45bf-841d-88c1965795e7
	I0612 15:03:12.149807   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.149807   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.149807   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.151702   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1936"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86624 chars]
	I0612 15:03:12.156052   13752 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:12.156052   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:12.156052   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.156052   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.156052   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.156754   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:12.156754   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.156754   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.156754   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.156754   13752 round_trippers.go:580]     Audit-Id: f8be8a83-c4fe-4a11-b0e7-b147af69d3ca
	I0612 15:03:12.156754   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.156754   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.156754   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.159990   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:12.160726   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:12.160726   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.160726   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.160790   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.161565   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:12.161565   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.161565   13752 round_trippers.go:580]     Audit-Id: 06eff638-34b1-4a22-88e3-b285e1cc1b1b
	I0612 15:03:12.161565   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.161565   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.161565   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.161565   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.161565   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.161565   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:12.664422   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:12.664422   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.664422   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.664422   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.664978   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:12.668841   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.668841   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.668841   13752 round_trippers.go:580]     Audit-Id: 11837902-b391-4790-abbe-09aec8047599
	I0612 15:03:12.668841   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.668841   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.668841   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.668841   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.668841   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:12.669683   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:12.669683   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.669683   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.669683   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.673477   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:12.673538   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.673538   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.673538   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.673538   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.673615   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.673615   13752 round_trippers.go:580]     Audit-Id: 3e1aae8e-ac8f-4dde-902f-5521941b4889
	I0612 15:03:12.673615   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.673851   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:13.159778   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:13.159880   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:13.159880   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:13.159880   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:13.160296   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:13.160296   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:13.164797   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:13.164797   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:13.164797   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:13.164797   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:13 GMT
	I0612 15:03:13.164797   13752 round_trippers.go:580]     Audit-Id: 76e9580d-960e-483f-aeb1-8cc53ead643d
	I0612 15:03:13.164797   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:13.165382   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:13.166173   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:13.166222   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:13.166222   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:13.166222   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:13.166837   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:13.166837   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:13.166837   13752 round_trippers.go:580]     Audit-Id: c8d4b69b-f9e5-486f-9f7a-9bff81a0e1a6
	I0612 15:03:13.166837   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:13.166837   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:13.170925   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:13.170925   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:13.170925   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:13 GMT
	I0612 15:03:13.171378   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:13.666808   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:13.666877   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:13.666910   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:13.666910   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:13.667751   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:13.667751   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:13.667751   13752 round_trippers.go:580]     Audit-Id: c24e6de7-d0fa-4172-826b-4ffd3b6b1188
	I0612 15:03:13.667751   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:13.667751   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:13.667751   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:13.667751   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:13.667751   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:13 GMT
	I0612 15:03:13.671558   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:13.672541   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:13.672541   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:13.672541   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:13.672541   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:13.674416   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:13.674416   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:13.676509   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:13.676509   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:13.676509   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:13.676877   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:13 GMT
	I0612 15:03:13.676977   13752 round_trippers.go:580]     Audit-Id: 95ba785b-a993-476e-845f-20f496642aa5
	I0612 15:03:13.676977   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:13.677337   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:14.171254   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:14.171318   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:14.171318   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:14.171318   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:14.171676   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:14.171676   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:14.171676   13752 round_trippers.go:580]     Audit-Id: 1e0b47c8-e502-472f-bd00-03832c49d99a
	I0612 15:03:14.171676   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:14.171676   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:14.171676   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:14.174814   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:14.174814   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:14 GMT
	I0612 15:03:14.175082   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:14.175984   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:14.176073   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:14.176073   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:14.176073   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:14.187090   13752 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0612 15:03:14.188690   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:14.188690   13752 round_trippers.go:580]     Audit-Id: 65486f55-678d-4969-b816-e5fd9f9ee245
	I0612 15:03:14.188690   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:14.188690   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:14.188690   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:14.188690   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:14.188690   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:14 GMT
	I0612 15:03:14.189172   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:14.189377   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:14.666257   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:14.666257   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:14.666257   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:14.666257   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:14.671209   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:14.671209   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:14.671209   13752 round_trippers.go:580]     Audit-Id: e4f38913-e3cf-46e6-b6d6-7fa966c9f863
	I0612 15:03:14.671209   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:14.671209   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:14.671209   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:14.671209   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:14.671209   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:14 GMT
	I0612 15:03:14.671209   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:14.672518   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:14.672518   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:14.672698   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:14.672698   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:14.672829   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:14.672829   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:14.675774   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:14.675774   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:14.675774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:14.675774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:14.675774   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:14 GMT
	I0612 15:03:14.675774   13752 round_trippers.go:580]     Audit-Id: 89da0cc2-d633-4620-843c-bc41adf0c7f2
	I0612 15:03:14.676153   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:15.162487   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:15.162487   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:15.162487   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:15.162487   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:15.163038   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:15.166486   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:15.166486   13752 round_trippers.go:580]     Audit-Id: b8296239-7a61-4367-a957-9347881e7348
	I0612 15:03:15.166486   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:15.166486   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:15.166486   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:15.166486   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:15.166486   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:15 GMT
	I0612 15:03:15.167057   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:15.167641   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:15.167641   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:15.167641   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:15.167641   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:15.168524   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:15.170924   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:15.170924   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:15.170924   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:15.171008   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:15 GMT
	I0612 15:03:15.171008   13752 round_trippers.go:580]     Audit-Id: 4bcac12e-945c-4901-ae4d-36f310b49853
	I0612 15:03:15.171081   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:15.171081   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:15.171399   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:15.659773   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:15.659856   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:15.659856   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:15.659856   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:15.662234   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:15.662234   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:15.662234   13752 round_trippers.go:580]     Audit-Id: dfe7942a-7d12-465d-9c14-47e97ecc8463
	I0612 15:03:15.662234   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:15.662234   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:15.662234   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:15.662234   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:15.662234   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:15 GMT
	I0612 15:03:15.662234   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:15.664704   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:15.664771   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:15.664771   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:15.664771   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:15.665050   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:15.667362   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:15.667362   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:15.667362   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:15 GMT
	I0612 15:03:15.667362   13752 round_trippers.go:580]     Audit-Id: 0521ac5c-8d08-411f-884f-92d9706f440c
	I0612 15:03:15.667362   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:15.667362   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:15.667438   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:15.667849   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:16.163461   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:16.163758   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:16.163758   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:16.163758   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:16.164108   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:16.164108   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:16.164108   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:16.164108   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:16.167888   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:16.167888   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:16.167888   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:16 GMT
	I0612 15:03:16.167888   13752 round_trippers.go:580]     Audit-Id: c39c8311-e347-47bd-9285-c2e7e1cc29ba
	I0612 15:03:16.168103   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:16.169046   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:16.169112   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:16.169112   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:16.169112   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:16.169392   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:16.172427   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:16.172427   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:16 GMT
	I0612 15:03:16.172427   13752 round_trippers.go:580]     Audit-Id: 2a7b428e-24c6-4724-b190-cde9bdceec6a
	I0612 15:03:16.172427   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:16.172427   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:16.172427   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:16.172427   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:16.173810   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:16.670654   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:16.670654   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:16.670926   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:16.670926   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:16.678184   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:03:16.678184   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:16.678184   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:16.678184   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:16.678184   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:16.678184   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:16 GMT
	I0612 15:03:16.678184   13752 round_trippers.go:580]     Audit-Id: 65c385b4-9a36-4448-bb81-66c3a1819fe8
	I0612 15:03:16.678184   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:16.678818   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:16.679532   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:16.679532   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:16.679532   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:16.679532   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:16.683291   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:16.683349   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:16.683349   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:16.683410   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:16.683410   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:16 GMT
	I0612 15:03:16.683410   13752 round_trippers.go:580]     Audit-Id: 939f7a8d-5c5b-4871-8220-ddddeb67fa1e
	I0612 15:03:16.683467   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:16.683467   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:16.683931   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:16.684300   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:17.163762   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:17.163892   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:17.163892   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:17.163892   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:17.167949   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:17.167949   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:17.167949   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:17.167949   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:17.167949   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:17.167949   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:17.167949   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:17 GMT
	I0612 15:03:17.167949   13752 round_trippers.go:580]     Audit-Id: 88629e13-4237-4c44-bd1f-a55d4962dd32
	I0612 15:03:17.167949   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:17.169057   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:17.169057   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:17.169057   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:17.169132   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:17.171964   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:17.171964   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:17.171964   13752 round_trippers.go:580]     Audit-Id: 1be9e3ce-e218-43a6-9f31-17694a81e20e
	I0612 15:03:17.171964   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:17.171964   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:17.171964   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:17.171964   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:17.171964   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:17 GMT
	I0612 15:03:17.171964   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:17.670219   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:17.670302   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:17.670302   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:17.670459   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:17.670700   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:17.670700   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:17.670700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:17.670700   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:17 GMT
	I0612 15:03:17.670700   13752 round_trippers.go:580]     Audit-Id: 7d0e7771-3c23-484b-b212-7ff0e24def33
	I0612 15:03:17.670700   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:17.670700   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:17.674406   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:17.675249   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:17.677336   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:17.677336   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:17.677422   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:17.677422   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:17.684537   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:03:17.684537   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:17.684537   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:17.684537   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:17.684537   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:17.684537   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:17.684537   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:17 GMT
	I0612 15:03:17.684537   13752 round_trippers.go:580]     Audit-Id: 477195ac-c4fe-41d8-9b86-b449755e984f
	I0612 15:03:17.685585   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:18.165180   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:18.165276   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:18.165276   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:18.165276   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:18.169231   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:18.169231   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:18.169231   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:18.169346   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:18.169346   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:18.169346   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:18.169346   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:18 GMT
	I0612 15:03:18.169346   13752 round_trippers.go:580]     Audit-Id: 311e0ce8-5836-4c61-bcaf-c9ef5d2897ad
	I0612 15:03:18.169582   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:18.170292   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:18.170292   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:18.170292   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:18.170292   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:18.170855   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:18.173907   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:18.173907   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:18.173907   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:18.173907   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:18.173907   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:18.173907   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:18 GMT
	I0612 15:03:18.173907   13752 round_trippers.go:580]     Audit-Id: d5fdca7b-f0a2-4796-8709-f4bb0506b8d8
	I0612 15:03:18.174455   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:18.671584   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:18.671675   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:18.671675   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:18.671675   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:18.671930   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:18.671930   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:18.671930   13752 round_trippers.go:580]     Audit-Id: 158be6d4-5c48-4cd9-90bb-1259dff2d35f
	I0612 15:03:18.671930   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:18.671930   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:18.671930   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:18.671930   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:18.671930   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:18 GMT
	I0612 15:03:18.676154   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:18.677304   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:18.677360   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:18.677360   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:18.677360   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:18.677554   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:18.677554   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:18.677554   13752 round_trippers.go:580]     Audit-Id: 1a86bd1c-1e8e-4e78-a854-fcc0e8788c07
	I0612 15:03:18.677554   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:18.677554   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:18.680339   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:18.680339   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:18.680339   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:18 GMT
	I0612 15:03:18.680696   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:19.158044   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:19.158044   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:19.158044   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:19.158044   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:19.162516   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:19.162516   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:19.162516   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:19.162516   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:19.162516   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:19.162516   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:19 GMT
	I0612 15:03:19.162595   13752 round_trippers.go:580]     Audit-Id: 88356845-5e18-4403-ac28-c741178182a7
	I0612 15:03:19.162595   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:19.162706   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:19.163676   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:19.163676   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:19.163676   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:19.163676   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:19.164026   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:19.164026   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:19.164026   13752 round_trippers.go:580]     Audit-Id: 539fb5bd-84d8-4eb3-9acd-9cc34f4b056c
	I0612 15:03:19.164026   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:19.166992   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:19.166992   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:19.166992   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:19.166992   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:19 GMT
	I0612 15:03:19.167302   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:19.167332   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:19.667212   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:19.667212   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:19.667212   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:19.667212   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:19.667738   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:19.672267   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:19.672267   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:19 GMT
	I0612 15:03:19.672267   13752 round_trippers.go:580]     Audit-Id: 68701686-d4e9-45fa-ac30-7f2986401a96
	I0612 15:03:19.672267   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:19.672267   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:19.672267   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:19.672267   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:19.672504   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:19.673560   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:19.673560   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:19.673560   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:19.673560   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:19.674088   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:19.674088   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:19.677192   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:19.677192   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:19.677192   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:19 GMT
	I0612 15:03:19.677192   13752 round_trippers.go:580]     Audit-Id: 240f3bb9-d77e-4d6a-9696-36726d94d774
	I0612 15:03:19.677192   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:19.677192   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:19.677569   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:20.163409   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:20.163443   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:20.163443   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:20.163584   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:20.167553   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:20.168873   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:20.168932   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:20.168932   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:20.168932   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:20.168932   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:20.168932   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:20 GMT
	I0612 15:03:20.168932   13752 round_trippers.go:580]     Audit-Id: c4610272-61b4-42b0-93c7-b2a384060bf1
	I0612 15:03:20.169132   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:20.169768   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:20.169768   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:20.169768   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:20.169768   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:20.172036   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:20.172036   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:20.172036   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:20 GMT
	I0612 15:03:20.173278   13752 round_trippers.go:580]     Audit-Id: a47444cc-a5df-43b8-8871-4689e735a750
	I0612 15:03:20.173278   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:20.173278   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:20.173278   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:20.173278   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:20.173667   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:20.668594   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:20.668668   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:20.668668   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:20.668668   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:20.669512   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:20.669512   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:20.672966   13752 round_trippers.go:580]     Audit-Id: c35f4e14-79e2-4bcb-aa08-c972ebfa5829
	I0612 15:03:20.672966   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:20.672966   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:20.672966   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:20.672966   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:20.672966   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:20 GMT
	I0612 15:03:20.673190   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:20.673916   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:20.673916   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:20.673988   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:20.673988   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:20.674237   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:20.677211   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:20.677211   13752 round_trippers.go:580]     Audit-Id: 639e7809-8def-448d-9647-4504d3e489c0
	I0612 15:03:20.677211   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:20.677211   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:20.677211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:20.677211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:20.677211   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:20 GMT
	I0612 15:03:20.677504   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:21.163316   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:21.163316   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:21.163316   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:21.163577   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:21.163841   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:21.168377   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:21.168377   13752 round_trippers.go:580]     Audit-Id: 25d6f705-42cf-41a5-8dad-6a7e1444683a
	I0612 15:03:21.168377   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:21.168377   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:21.168453   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:21.168453   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:21.168453   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:21 GMT
	I0612 15:03:21.168678   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:21.169479   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:21.169479   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:21.169479   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:21.169479   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:21.169818   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:21.172190   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:21.172190   13752 round_trippers.go:580]     Audit-Id: cc8ca485-df90-483a-892c-7f62f30ae7ae
	I0612 15:03:21.172286   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:21.172286   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:21.172286   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:21.172286   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:21.172286   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:21 GMT
	I0612 15:03:21.172589   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:21.173340   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
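
The pod_ready.go:102 line above marks the end of one polling iteration: minikube keeps re-fetching the Pod (and, for context, its Node) roughly every 500ms, as the timestamps show, and reports the Pod's Ready condition, which remains "False" until CoreDNS passes its readiness probe. A minimal client-go sketch of an equivalent readiness poll follows; the pod name and namespace are taken from this log, the 500ms interval is inferred from the timestamps, and the 6-minute timeout is an assumption. This is not minikube's actual pod_ready.go code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a reachable cluster via the default kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll the pod's Ready condition every 500ms until it is True or
        // the (assumed) 6-minute budget runs out.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-vgcxw", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, cond.Status)
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // no Ready condition reported yet
            })
        if err != nil {
            panic(err)
        }
    }
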
	I0612 15:03:21.656376   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:21.656665   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:21.656665   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:21.656665   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:21.657043   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:21.657043   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:21.657043   13752 round_trippers.go:580]     Audit-Id: 33236351-ac47-40af-af33-b76163628b9c
	I0612 15:03:21.657043   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:21.657043   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:21.657043   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:21.657043   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:21.660859   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:21 GMT
	I0612 15:03:21.660859   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:21.661995   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:21.662106   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:21.662106   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:21.662106   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:21.662283   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:21.665127   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:21.665127   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:21.665127   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:21.665193   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:21 GMT
	I0612 15:03:21.665193   13752 round_trippers.go:580]     Audit-Id: ebb79d47-d698-4304-9769-1dff97ae62b1
	I0612 15:03:21.665193   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:21.665193   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:21.665829   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:22.156792   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:22.156792   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:22.156792   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:22.156792   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:22.162689   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:22.162689   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:22.162689   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:22.162689   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:22.162689   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:22.162689   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:22 GMT
	I0612 15:03:22.162689   13752 round_trippers.go:580]     Audit-Id: 43dd6a54-f263-4c90-b38b-2a8fb59c2e9c
	I0612 15:03:22.162689   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:22.163338   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:22.164160   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:22.164160   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:22.164160   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:22.164160   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:22.166089   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:22.166089   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:22.166089   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:22 GMT
	I0612 15:03:22.166089   13752 round_trippers.go:580]     Audit-Id: 62c9b18d-4347-4801-b260-96182709d048
	I0612 15:03:22.166089   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:22.166089   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:22.166089   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:22.166089   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:22.167723   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:22.671281   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:22.671510   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:22.671585   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:22.671585   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:22.675359   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:22.675416   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:22.675416   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:22.675416   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:22.675416   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:22.675416   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:22.675416   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:22 GMT
	I0612 15:03:22.675416   13752 round_trippers.go:580]     Audit-Id: 748f48c6-cee3-4999-a6d3-438c31138736
	I0612 15:03:22.675416   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:22.676152   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:22.676152   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:22.676152   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:22.676675   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:22.679533   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:22.679611   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:22.679611   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:22 GMT
	I0612 15:03:22.679611   13752 round_trippers.go:580]     Audit-Id: 4e4a5eb6-456e-4e7c-844f-e55fe98143fb
	I0612 15:03:22.679611   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:22.679611   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:22.679611   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:22.679611   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:22.679611   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:23.160365   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:23.160365   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:23.160365   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:23.160365   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:23.165044   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:23.165044   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:23.165044   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:23.165044   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:23.165044   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:23.165044   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:23.165044   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:23 GMT
	I0612 15:03:23.165044   13752 round_trippers.go:580]     Audit-Id: 732a59c7-9424-4eab-8c79-b22399b0e8f6
	I0612 15:03:23.165044   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:23.166337   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:23.166337   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:23.166437   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:23.166437   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:23.167065   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:23.167065   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:23.167065   13752 round_trippers.go:580]     Audit-Id: 3b35709e-90eb-429f-bd7b-dbe2f80ba5d5
	I0612 15:03:23.167065   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:23.167065   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:23.167065   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:23.167065   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:23.167065   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:23 GMT
	I0612 15:03:23.169795   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:23.664928   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:23.665186   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:23.665186   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:23.665186   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:23.668923   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:23.668923   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:23.668923   13752 round_trippers.go:580]     Audit-Id: 463fe866-f47a-40de-8951-fd8a90a654c2
	I0612 15:03:23.668923   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:23.668923   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:23.668923   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:23.668923   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:23.668923   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:23 GMT
	I0612 15:03:23.669206   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:23.670189   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:23.670220   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:23.670258   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:23.670258   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:23.676511   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:23.676511   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:23.676511   13752 round_trippers.go:580]     Audit-Id: f487fc2f-492d-44be-9469-ccb242c07bac
	I0612 15:03:23.676511   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:23.676511   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:23.676511   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:23.676511   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:23.676511   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:23 GMT
	I0612 15:03:23.676511   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:23.677254   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:24.161068   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:24.161068   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:24.161068   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:24.161068   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:24.161655   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:24.165566   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:24.165566   13752 round_trippers.go:580]     Audit-Id: 93499f6d-c31d-436b-a96a-cecf7fb494c8
	I0612 15:03:24.165566   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:24.165566   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:24.165566   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:24.165656   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:24.165656   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:24 GMT
	I0612 15:03:24.165974   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:24.166600   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:24.166600   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:24.166600   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:24.166600   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:24.167370   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:24.170282   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:24.170282   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:24.170282   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:24.170282   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:24.170282   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:24.170282   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:24 GMT
	I0612 15:03:24.170282   13752 round_trippers.go:580]     Audit-Id: f6c0cf04-16ea-466e-b6de-6104ea128202
	I0612 15:03:24.170651   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:24.668066   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:24.668162   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:24.668162   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:24.668221   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:24.672157   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:24.672157   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:24.672157   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:24.672157   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:24.672157   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:24.672157   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:24.672157   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:24 GMT
	I0612 15:03:24.672157   13752 round_trippers.go:580]     Audit-Id: b86a623b-9e8c-4f96-925d-6204ba1ff4f7
	I0612 15:03:24.672157   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:24.673297   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:24.673297   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:24.673297   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:24.673369   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:24.673530   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:24.673530   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:24.673530   13752 round_trippers.go:580]     Audit-Id: 87751d8e-bc3e-485e-8b93-124c084c39ad
	I0612 15:03:24.673530   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:24.673530   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:24.673530   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:24.673530   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:24.673530   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:24 GMT
	I0612 15:03:24.677113   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:25.171354   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:25.171491   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:25.171491   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:25.171491   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:25.172211   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:25.172211   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:25.175615   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:25.175615   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:25 GMT
	I0612 15:03:25.175615   13752 round_trippers.go:580]     Audit-Id: 11c9e43c-6ccb-4be8-bc41-4303d1dc378d
	I0612 15:03:25.175615   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:25.175615   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:25.175615   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:25.176058   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:25.177017   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:25.177087   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:25.177087   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:25.177087   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:25.177320   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:25.180616   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:25.180616   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:25 GMT
	I0612 15:03:25.180616   13752 round_trippers.go:580]     Audit-Id: 940139de-b51c-4be3-9dcd-f2c5ce3b2fa9
	I0612 15:03:25.180616   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:25.180616   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:25.180616   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:25.180616   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:25.180912   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:25.666958   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:25.667124   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:25.667124   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:25.667124   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:25.670726   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:25.670726   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:25.670726   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:25.670726   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:25.670726   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:25.670726   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:25 GMT
	I0612 15:03:25.670726   13752 round_trippers.go:580]     Audit-Id: 09ec9c74-030b-49b4-a669-801c14f9202e
	I0612 15:03:25.670726   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:25.671556   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:25.672302   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:25.672302   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:25.672302   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:25.672302   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:25.672549   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:25.672549   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:25.674829   13752 round_trippers.go:580]     Audit-Id: 3800fa11-bd60-4376-a2b4-e0430795f986
	I0612 15:03:25.674829   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:25.674829   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:25.674829   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:25.674829   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:25.674829   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:25 GMT
	I0612 15:03:25.675231   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:26.157801   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:26.157892   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:26.157892   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:26.157892   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:26.158464   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:26.162372   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:26.162372   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:26 GMT
	I0612 15:03:26.162372   13752 round_trippers.go:580]     Audit-Id: fc04c4f1-41f7-4578-bbf7-88e5f1c798e3
	I0612 15:03:26.162372   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:26.162372   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:26.162372   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:26.162372   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:26.162850   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:26.163585   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:26.163660   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:26.163660   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:26.163660   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:26.165620   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:26.165620   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:26.165620   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:26.167458   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:26.167458   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:26.167458   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:26 GMT
	I0612 15:03:26.167458   13752 round_trippers.go:580]     Audit-Id: f7a9c7c3-3887-4efa-8b9d-901a493b6a0c
	I0612 15:03:26.167458   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:26.167653   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:26.168173   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
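
One detail worth noting in every response above: the X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers are added by the API server's Priority and Fairness machinery and identify, by UID, the FlowSchema and PriorityLevelConfiguration that classified each request. Their constant values here simply mean every poll was handled under the same priority level; if the UIDs need to be mapped back to names, the underlying objects can be listed with "kubectl get flowschemas" and "kubectl get prioritylevelconfigurations".
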
	I0612 15:03:26.668756   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:26.669005   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:26.669005   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:26.669005   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:26.669383   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:26.673035   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:26.673035   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:26.673035   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:26.673035   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:26 GMT
	I0612 15:03:26.673035   13752 round_trippers.go:580]     Audit-Id: bcd2a625-e4bb-4ace-b624-ac972672ce5d
	I0612 15:03:26.673035   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:26.673035   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:26.673421   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:26.674370   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:26.674370   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:26.674370   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:26.674370   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:26.674707   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:26.674707   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:26.674707   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:26 GMT
	I0612 15:03:26.674707   13752 round_trippers.go:580]     Audit-Id: c3840da4-f77d-40d0-ab64-4cecc86047d7
	I0612 15:03:26.674707   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:26.674707   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:26.674707   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:26.674707   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:26.677275   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:27.168209   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:27.168344   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:27.168344   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:27.168454   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:27.169191   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:27.173151   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:27.173151   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:27 GMT
	I0612 15:03:27.173151   13752 round_trippers.go:580]     Audit-Id: 49d9ec69-5ee7-48f6-997d-64f4c3aff8d7
	I0612 15:03:27.173151   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:27.173151   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:27.173151   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:27.173151   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:27.173413   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:27.174230   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:27.174345   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:27.174345   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:27.174345   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:27.177130   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:27.177130   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:27.177130   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:27.177130   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:27.178071   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:27.178071   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:27.178071   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:27 GMT
	I0612 15:03:27.178071   13752 round_trippers.go:580]     Audit-Id: b13d1e26-627f-49c0-b39e-2f7f8b3e9e4b
	I0612 15:03:27.178388   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:27.660792   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:27.660792   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:27.660792   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:27.660792   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:27.661489   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:27.664871   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:27.664871   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:27.664871   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:27.664871   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:27.664871   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:27 GMT
	I0612 15:03:27.665144   13752 round_trippers.go:580]     Audit-Id: bc533479-0f16-4d7b-808d-0e351b817555
	I0612 15:03:27.665144   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:27.665281   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:27.666461   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:27.666638   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:27.666638   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:27.666638   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:27.671788   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:27.671788   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:27.671788   13752 round_trippers.go:580]     Audit-Id: 4c621081-2e41-4e0d-95f7-4a473694b82b
	I0612 15:03:27.671788   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:27.671788   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:27.671788   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:27.671788   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:27.671788   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:27 GMT
	I0612 15:03:27.671788   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:28.168931   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:28.169047   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:28.169129   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:28.169129   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:28.169865   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:28.169865   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:28.169865   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:28.169865   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:28.169865   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:28 GMT
	I0612 15:03:28.169865   13752 round_trippers.go:580]     Audit-Id: faa7f381-8508-41db-9697-7f19237c56c5
	I0612 15:03:28.169865   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:28.169865   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:28.174441   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:28.175301   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:28.175371   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:28.175371   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:28.175371   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:28.175606   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:28.175606   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:28.175606   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:28.175606   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:28.178802   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:28 GMT
	I0612 15:03:28.178802   13752 round_trippers.go:580]     Audit-Id: 4c06d460-a07e-4aa8-baf3-fc9841843b78
	I0612 15:03:28.178802   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:28.178802   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:28.178802   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:28.179417   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
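
The `pod_ready.go:102` line above is the readiness gate driving this trace: after each GET of the pod, the checker inspects the pod's `Ready` condition and logs `"Ready":"False"` until it flips. A minimal sketch of that condition check, assuming only the `k8s.io/api` types; this is illustrative, not minikube's actual `pod_ready.go` code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True --
// the same check behind the `"Ready":"False"` log line above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod whose Ready condition is still False, as in the trace.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // prints: false
}
```
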
	I0612 15:03:28.671056   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:28.671056   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:28.671327   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:28.671327   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:28.671596   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:28.671596   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:28.675681   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:28.675681   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:28 GMT
	I0612 15:03:28.675681   13752 round_trippers.go:580]     Audit-Id: a2d6bf8c-d54f-40f6-b8bd-61f69cc97cf4
	I0612 15:03:28.675681   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:28.675681   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:28.675681   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:28.675759   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:28.676677   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:28.676677   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:28.676769   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:28.676769   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:28.676977   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:28.676977   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:28.676977   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:28.676977   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:28.676977   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:28.676977   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:28.680132   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:28 GMT
	I0612 15:03:28.680132   13752 round_trippers.go:580]     Audit-Id: d34b468f-28d3-422b-b6fd-56a813b2aa38
	I0612 15:03:28.680228   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:29.167076   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:29.167076   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:29.167076   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:29.167076   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:29.171779   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:29.171839   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:29.171839   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:29 GMT
	I0612 15:03:29.171839   13752 round_trippers.go:580]     Audit-Id: ac5fbbcf-02ad-44c9-9f3b-e393757b25da
	I0612 15:03:29.171839   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:29.171839   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:29.171839   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:29.171839   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:29.171839   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:29.172626   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:29.173167   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:29.173167   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:29.173167   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:29.176560   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:29.176628   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:29.176628   13752 round_trippers.go:580]     Audit-Id: d770ba98-8506-4a7c-aeea-e0dc5c1c146e
	I0612 15:03:29.176628   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:29.176747   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:29.176747   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:29.176747   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:29.176747   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:29 GMT
	I0612 15:03:29.176747   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:29.667736   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:29.668079   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:29.668079   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:29.668079   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:29.672389   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:29.672389   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:29.672389   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:29.672389   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:29.672389   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:29.672389   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:29 GMT
	I0612 15:03:29.672389   13752 round_trippers.go:580]     Audit-Id: e0e16183-9ca2-424a-b002-96d6c958f2e6
	I0612 15:03:29.672389   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:29.672389   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:29.673549   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:29.673549   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:29.673549   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:29.673549   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:29.674393   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:29.676452   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:29.676452   13752 round_trippers.go:580]     Audit-Id: 23e7d650-7230-4392-981e-c3020661e263
	I0612 15:03:29.676452   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:29.676452   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:29.676527   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:29.676527   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:29.676527   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:29 GMT
	I0612 15:03:29.676769   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:30.166822   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:30.167144   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:30.167144   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:30.167144   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:30.167543   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:30.170838   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:30.170838   13752 round_trippers.go:580]     Audit-Id: 694b4929-d08c-4859-914a-d3243f0eccd8
	I0612 15:03:30.170838   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:30.170838   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:30.170838   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:30.170838   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:30.170838   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:30 GMT
	I0612 15:03:30.171098   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:30.171997   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:30.171997   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:30.171997   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:30.172093   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:30.172342   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:30.175285   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:30.175285   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:30.175285   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:30.175285   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:30.175285   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:30 GMT
	I0612 15:03:30.175285   13752 round_trippers.go:580]     Audit-Id: f81f2098-4ee9-4bef-8315-d50f827a543a
	I0612 15:03:30.175285   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:30.175567   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:30.664442   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:30.664442   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:30.664541   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:30.664541   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:30.664767   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:30.668873   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:30.668873   13752 round_trippers.go:580]     Audit-Id: b413e64c-e534-41f0-a35f-a9b0ea692654
	I0612 15:03:30.668873   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:30.668873   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:30.668873   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:30.668873   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:30.668873   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:30 GMT
	I0612 15:03:30.669620   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:30.670397   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:30.670397   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:30.670397   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:30.670397   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:30.676666   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:30.676666   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:30.676666   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:30 GMT
	I0612 15:03:30.676666   13752 round_trippers.go:580]     Audit-Id: de26cfac-4cc3-492f-b156-47f572324349
	I0612 15:03:30.676666   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:30.676666   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:30.676666   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:30.676666   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:30.676666   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:30.677604   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
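
The `round_trippers.go:463/469/473/574/577/580` lines come from client-go's debugging transport (`k8s.io/client-go/transport/round_trippers.go`), which wraps the HTTP transport and logs the request line, request headers, response status, and response headers when log verbosity is raised (minikube runs with `--alsologtostderr` and a high `-v` level; response bodies appear truncated at this verbosity). A minimal sketch of the wrapping-RoundTripper pattern that produces traces like these; the type and endpoint below are illustrative, not client-go's actual implementation:

```go
package main

import (
	"log"
	"net/http"
)

// debugRoundTripper logs each request and response as it passes through,
// analogous to the round_trippers.go trace lines above.
type debugRoundTripper struct {
	next http.RoundTripper
}

func (d *debugRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v)
	}
	resp, err := d.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s", resp.Status)
	log.Printf("Response Headers:")
	for k, v := range resp.Header {
		log.Printf("    %s: %v", k, v)
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: &debugRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
```
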
	I0612 15:03:31.160016   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:31.160016   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:31.160016   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:31.160016   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:31.160554   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:31.164418   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:31.164418   13752 round_trippers.go:580]     Audit-Id: e715e0cf-29b7-4377-9cff-ae02dd5bde9c
	I0612 15:03:31.164418   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:31.164418   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:31.164418   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:31.164418   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:31.164418   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:31 GMT
	I0612 15:03:31.164418   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:31.165562   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:31.165657   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:31.165657   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:31.165657   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:31.166475   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:31.166475   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:31.166475   13752 round_trippers.go:580]     Audit-Id: d7a61c21-4bd0-4823-a58d-b23911294f55
	I0612 15:03:31.166475   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:31.166475   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:31.166475   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:31.166475   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:31.168620   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:31 GMT
	I0612 15:03:31.169052   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:31.661261   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:31.661261   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:31.661261   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:31.661261   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:31.666232   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:31.666232   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:31.666232   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:31.666232   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:31.666232   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:31.666232   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:31.666232   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:31 GMT
	I0612 15:03:31.666232   13752 round_trippers.go:580]     Audit-Id: 110ac12a-7d7d-444f-a8af-051c7e5b2bb5
	I0612 15:03:31.666232   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:31.667307   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:31.667380   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:31.667380   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:31.667380   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:31.667605   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:31.667605   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:31.667605   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:31.667605   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:31.671283   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:31.671283   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:31.671283   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:31 GMT
	I0612 15:03:31.671283   13752 round_trippers.go:580]     Audit-Id: f3593e29-483a-403a-99e6-aa0d08ce3460
	I0612 15:03:31.671578   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:32.162517   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:32.162517   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:32.162517   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:32.162517   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:32.169807   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:03:32.169807   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:32.169807   13752 round_trippers.go:580]     Audit-Id: 5a76526a-f78e-4dca-b93d-372476ca3459
	I0612 15:03:32.169807   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:32.169807   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:32.169807   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:32.169807   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:32.169807   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:32 GMT
	I0612 15:03:32.169807   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:32.170520   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:32.170520   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:32.170520   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:32.170520   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:32.174112   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:32.174112   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:32.174112   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:32.174112   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:32 GMT
	I0612 15:03:32.174112   13752 round_trippers.go:580]     Audit-Id: af3ee3c8-921a-4739-a920-a87e2a810232
	I0612 15:03:32.174112   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:32.174112   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:32.174112   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:32.175259   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:32.667728   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:32.667728   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:32.667728   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:32.667728   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:32.668280   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:32.672371   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:32.672371   13752 round_trippers.go:580]     Audit-Id: c6f0b530-4a18-4e17-ab4d-b154d65a5c76
	I0612 15:03:32.672371   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:32.672371   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:32.672371   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:32.672371   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:32.672371   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:32 GMT
	I0612 15:03:32.672733   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:32.672869   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:32.673415   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:32.673415   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:32.673415   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:32.676109   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:32.676109   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:32.676109   13752 round_trippers.go:580]     Audit-Id: 4bb42155-6b90-494f-8aeb-1d78c67a8b1c
	I0612 15:03:32.676109   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:32.676109   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:32.676109   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:32.676109   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:32.676109   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:32 GMT
	I0612 15:03:32.676109   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:33.162399   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:33.162399   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:33.162642   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:33.162642   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:33.162953   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:33.166722   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:33.166722   13752 round_trippers.go:580]     Audit-Id: ab2525dc-7384-400e-907f-b2310b507413
	I0612 15:03:33.166722   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:33.166722   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:33.166722   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:33.166722   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:33.166722   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:33 GMT
	I0612 15:03:33.166894   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:33.167602   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:33.167705   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:33.167705   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:33.167705   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:33.168431   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:33.168431   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:33.168431   13752 round_trippers.go:580]     Audit-Id: 22a59fb6-9e12-4cac-83b4-38c00b4f1caf
	I0612 15:03:33.168431   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:33.168431   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:33.170997   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:33.170997   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:33.170997   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:33 GMT
	I0612 15:03:33.171164   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:33.171605   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:33.662817   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:33.663061   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:33.663061   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:33.663061   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:33.667430   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:33.667430   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:33.667533   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:33 GMT
	I0612 15:03:33.667533   13752 round_trippers.go:580]     Audit-Id: 17e14e45-799f-4791-bf4e-894761c5907a
	I0612 15:03:33.667533   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:33.667533   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:33.667533   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:33.667533   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:33.667913   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:33.668661   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:33.668661   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:33.668661   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:33.668661   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:33.672196   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:33.672196   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:33.672196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:33.672310   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:33 GMT
	I0612 15:03:33.672310   13752 round_trippers.go:580]     Audit-Id: e15f3616-0b68-48be-95d2-c8a925c3ca63
	I0612 15:03:33.672310   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:33.672310   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:33.672310   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:33.672310   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:34.166108   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:34.166193   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:34.166193   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:34.166193   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:34.166631   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:34.166631   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:34.166631   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:34.170700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:34.170700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:34.170700   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:34 GMT
	I0612 15:03:34.170700   13752 round_trippers.go:580]     Audit-Id: e0f9593a-25b5-4d23-aca0-04f229d01366
	I0612 15:03:34.170764   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:34.170764   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:34.171558   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:34.171558   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:34.172083   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:34.172083   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:34.173038   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:34.173038   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:34.173038   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:34.173038   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:34.173038   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:34 GMT
	I0612 15:03:34.175557   13752 round_trippers.go:580]     Audit-Id: f7732646-23ec-4876-97eb-3274c097813c
	I0612 15:03:34.175557   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:34.175557   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:34.175816   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:34.671090   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:34.671322   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:34.671388   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:34.671388   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:34.672211   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:34.672211   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:34.672211   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:34 GMT
	I0612 15:03:34.672211   13752 round_trippers.go:580]     Audit-Id: dc56804b-3732-4dcb-a2b4-687357475b3f
	I0612 15:03:34.672211   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:34.672211   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:34.672211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:34.675789   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:34.676077   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:34.676868   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:34.676940   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:34.676940   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:34.676940   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:34.677217   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:34.677217   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:34.677217   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:34.677217   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:34.677217   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:34.677217   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:34 GMT
	I0612 15:03:34.677217   13752 round_trippers.go:580]     Audit-Id: 199db18b-d3bd-4b47-b728-480bf6a2aa33
	I0612 15:03:34.677217   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:34.680346   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:35.163175   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:35.163332   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:35.163380   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:35.163380   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:35.168958   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:35.170213   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:35.170213   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:35.170213   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:35.170213   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:35.170213   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:35.170213   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:35 GMT
	I0612 15:03:35.170213   13752 round_trippers.go:580]     Audit-Id: 5b39cf61-2da0-41a4-b851-b18b39a16cfa
	I0612 15:03:35.170404   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:35.171299   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:35.171299   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:35.171299   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:35.171299   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:35.171549   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:35.171549   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:35.171549   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:35.174082   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:35.174082   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:35 GMT
	I0612 15:03:35.174082   13752 round_trippers.go:580]     Audit-Id: 95fe8234-6341-4f69-9815-d6e98a9d2745
	I0612 15:03:35.174082   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:35.174082   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:35.174082   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:35.174866   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:35.671898   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:35.671973   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:35.672001   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:35.672001   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:35.676163   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:35.676163   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:35.676163   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:35 GMT
	I0612 15:03:35.676163   13752 round_trippers.go:580]     Audit-Id: 22db17fa-155d-453b-8b07-f5a4d24dac30
	I0612 15:03:35.676163   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:35.676163   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:35.676163   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:35.676163   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:35.677047   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:35.677808   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:35.677808   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:35.677808   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:35.677808   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:35.678585   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:35.678585   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:35.678585   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:35.678585   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:35 GMT
	I0612 15:03:35.678585   13752 round_trippers.go:580]     Audit-Id: ef861691-e177-4246-9077-55910de0c84f
	I0612 15:03:35.678585   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:35.678585   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:35.678585   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:35.681284   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:36.168520   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:36.168622   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:36.168656   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:36.168656   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:36.169557   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:36.173070   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:36.173114   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:36.173114   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:36 GMT
	I0612 15:03:36.173114   13752 round_trippers.go:580]     Audit-Id: 2c5d9972-7efd-4648-948a-2efb0385b346
	I0612 15:03:36.173114   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:36.173114   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:36.173114   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:36.173367   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:36.174321   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:36.174321   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:36.174321   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:36.174321   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:36.175573   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:36.175573   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:36.175573   13752 round_trippers.go:580]     Audit-Id: 2d0dc48c-40ec-4624-98d9-01a07cfffc4a
	I0612 15:03:36.175573   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:36.177169   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:36.177169   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:36.177169   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:36.177169   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:36 GMT
	I0612 15:03:36.177614   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:36.672850   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:36.673104   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:36.673104   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:36.673104   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:36.678512   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:36.678512   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:36.678512   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:36 GMT
	I0612 15:03:36.678512   13752 round_trippers.go:580]     Audit-Id: bff89353-8d88-42f5-b65f-29c64d596196
	I0612 15:03:36.678512   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:36.678512   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:36.678512   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:36.678512   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:36.679387   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:36.680271   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:36.680315   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:36.680315   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:36.680315   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:36.687672   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:03:36.687672   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:36.687891   13752 round_trippers.go:580]     Audit-Id: 18d95d95-fdf3-4519-b29d-af9c6d701622
	I0612 15:03:36.687891   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:36.687891   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:36.687891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:36.687891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:36.687891   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:36 GMT
	I0612 15:03:36.687891   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:37.166716   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:37.166949   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:37.166949   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:37.166949   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:37.171371   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:37.171371   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:37.171371   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:37.171371   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:37.171371   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:37.171478   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:37 GMT
	I0612 15:03:37.171478   13752 round_trippers.go:580]     Audit-Id: b1f35ec0-7aa8-43c2-b471-de673d559313
	I0612 15:03:37.171478   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:37.171634   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:37.172469   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:37.172469   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:37.172557   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:37.172557   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:37.172999   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:37.175088   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:37.175088   13752 round_trippers.go:580]     Audit-Id: 09c0339d-397e-43c7-a0ca-bb7484a112de
	I0612 15:03:37.175088   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:37.175088   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:37.175088   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:37.175088   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:37.175088   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:37 GMT
	I0612 15:03:37.175603   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:37.176144   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:37.661218   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:37.661218   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:37.661218   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:37.661218   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:37.662019   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:37.666220   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:37.666220   13752 round_trippers.go:580]     Audit-Id: 12526e9b-a403-4fc0-a0eb-c834dfe65931
	I0612 15:03:37.666220   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:37.666220   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:37.666220   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:37.666220   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:37.666220   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:37 GMT
	I0612 15:03:37.666361   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:37.667294   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:37.667350   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:37.667350   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:37.667350   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:37.667603   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:37.671004   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:37.671004   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:37.671004   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:37.671004   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:37.671004   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:37.671004   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:37 GMT
	I0612 15:03:37.671004   13752 round_trippers.go:580]     Audit-Id: add93adc-543b-4e99-b0f5-6a8b83dd9038
	I0612 15:03:37.671304   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.168882   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:38.168882   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.168882   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.168882   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.175785   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:38.175785   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.175918   13752 round_trippers.go:580]     Audit-Id: 91c58ae4-1dd8-48d7-9b0a-bfaa5a58ab78
	I0612 15:03:38.175918   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.175918   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.175918   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.175918   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.175918   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.176170   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0612 15:03:38.177199   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.177199   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.177304   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.177304   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.181021   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:38.181021   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.181021   13752 round_trippers.go:580]     Audit-Id: a8a4ed45-6f84-46de-8bf7-daa3f43c4e0c
	I0612 15:03:38.181196   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.181196   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.181196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.181196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.181196   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.181356   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.182165   13752 pod_ready.go:92] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.182251   13752 pod_ready.go:81] duration metric: took 26.0261131s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
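
[editor's note] The 26s figure above is the product of the poll loop that fills this log: roughly every 500ms the client GETs the pod (and its node) and checks whether the pod's "Ready" condition has flipped to "True" — pod_ready.go:102 logs each "False" observation, pod_ready.go:92 logs the final "True", and pod_ready.go:81 reports the elapsed duration. A minimal client-go sketch of that pattern is below; it is an illustration under stated assumptions, not minikube's actual pod_ready.go, and the helper name waitPodReady is hypothetical.

// Minimal sketch of the readiness poll seen in this log: fetch the pod every
// 500ms and return once its "Ready" condition is True, up to a timeout.
// Hypothetical helper for illustration only (assumed: kubeconfig in the
// default location; pod/namespace names taken from the log above).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	// PollUntilContextTimeout re-runs the condition func every interval until
	// it returns true, an error, or the timeout elapses.
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // stop on API errors; a tolerant loop could retry transient ones
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					// Mirrors the pod_ready.go:102 / :92 status lines in the log.
					fmt.Printf("pod %q has status \"Ready\":%q (%s elapsed)\n",
						name, c.Status, time.Since(start).Round(time.Millisecond))
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet; keep polling
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same 6m0s budget the log uses for each kube-system pod.
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-vgcxw", 6*time.Minute); err != nil {
		panic(err)
	}
}

The subsequent etcd-multinode-025000 and kube-apiserver-multinode-025000 waits below follow the same loop; they return on the first poll because those pods are already Ready.
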
	I0612 15:03:38.182251   13752 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.182399   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025000
	I0612 15:03:38.182399   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.182399   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.182399   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.184688   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.184688   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.184688   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.184688   13752 round_trippers.go:580]     Audit-Id: d099db0c-abaf-4bd9-ad98-9fd0791086dd
	I0612 15:03:38.184688   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.184688   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.184901   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.184901   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.184901   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2","resourceVersion":"1875","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.200.184:2379","kubernetes.io/config.hash":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.mirror":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.seen":"2024-06-12T22:02:25.563300686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0612 15:03:38.185721   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.185721   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.185721   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.185721   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.187326   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:38.187326   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.187326   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.187326   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.188320   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.188320   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.188320   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.188320   13752 round_trippers.go:580]     Audit-Id: 831ae8ba-c4cd-48aa-a7f2-1efe4660d320
	I0612 15:03:38.188394   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.188394   13752 pod_ready.go:92] pod "etcd-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.188394   13752 pod_ready.go:81] duration metric: took 6.143ms for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.188942   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.189097   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025000
	I0612 15:03:38.189119   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.189119   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.189155   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.191467   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:38.192826   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.192826   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.192826   13752 round_trippers.go:580]     Audit-Id: 0da92277-168f-40e1-ac80-ea72ae98a736
	I0612 15:03:38.192901   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.192901   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.192901   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.192901   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.192901   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025000","namespace":"kube-system","uid":"63e55411-d432-4e5a-becc-fae0887fecae","resourceVersion":"1897","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.200.184:8443","kubernetes.io/config.hash":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.mirror":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.seen":"2024-06-12T22:02:25.478872091Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0612 15:03:38.193548   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.193548   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.193548   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.193548   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.199802   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:38.199802   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.199887   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.199887   13752 round_trippers.go:580]     Audit-Id: 9e2e23e7-7222-442d-bcf7-98ee76952a75
	I0612 15:03:38.199887   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.199887   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.199887   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.199887   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.199887   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.200459   13752 pod_ready.go:92] pod "kube-apiserver-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.200565   13752 pod_ready.go:81] duration metric: took 11.6229ms for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.200565   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.200644   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025000
	I0612 15:03:38.200644   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.200719   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.200719   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.203896   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:38.203896   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.203896   13752 round_trippers.go:580]     Audit-Id: f8136e95-ab81-4cfc-9502-ee69c96ac001
	I0612 15:03:38.203896   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.203896   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.203896   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.203896   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.203896   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.203896   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025000","namespace":"kube-system","uid":"68c9aa4f-49ee-439c-ad51-7943e65c0085","resourceVersion":"1895","creationTimestamp":"2024-06-12T21:39:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.mirror":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.seen":"2024-06-12T21:39:23.999674614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0612 15:03:38.205008   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.205073   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.205073   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.205073   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.206444   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:38.207706   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.207706   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.207706   13752 round_trippers.go:580]     Audit-Id: 671224cc-e5a0-44e9-842f-a707d363cf63
	I0612 15:03:38.207706   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.207706   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.207706   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.207759   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.208004   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.208004   13752 pod_ready.go:92] pod "kube-controller-manager-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.208004   13752 pod_ready.go:81] duration metric: took 7.4382ms for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.208004   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.208591   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:03:38.208636   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.208636   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.208636   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.209284   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.209284   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.211393   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.211393   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.211393   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.211437   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.211437   13752 round_trippers.go:580]     Audit-Id: cb34be29-04ce-4a5b-b7d8-f47e54f40eb9
	I0612 15:03:38.211437   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.211588   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47lr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"10b24fa7-8eea-4fbb-ab18-404e853aa7ab","resourceVersion":"1793","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0612 15:03:38.211793   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.212407   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.212477   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.212477   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.215193   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:38.215193   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.215193   13752 round_trippers.go:580]     Audit-Id: 1ac66015-7962-4bde-832e-bd0d2a552f90
	I0612 15:03:38.215193   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.215193   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.215193   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.215193   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.215193   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.215193   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.216034   13752 pod_ready.go:92] pod "kube-proxy-47lr8" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.216034   13752 pod_ready.go:81] duration metric: took 8.0304ms for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.216095   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.369754   13752 request.go:629] Waited for 153.3034ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:03:38.369962   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:03:38.369962   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.369962   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.370080   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.373867   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:38.373867   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.373867   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.373867   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.373867   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.373867   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.373867   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.373867   13752 round_trippers.go:580]     Audit-Id: 20d56aed-e8ac-4ea1-81c6-7eaa4818e6d1
	I0612 15:03:38.373867   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7jwdg","generateName":"kube-proxy-","namespace":"kube-system","uid":"643030f7-b876-4243-bacc-04205e88cc9e","resourceVersion":"1748","creationTimestamp":"2024-06-12T21:47:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:47:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0612 15:03:38.571854   13752 request.go:629] Waited for 196.5255ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:03:38.571933   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:03:38.571933   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.572061   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.572139   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.575522   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.575522   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.575522   13752 round_trippers.go:580]     Audit-Id: e52c8ecb-c0d5-4696-878c-dbeef778a857
	I0612 15:03:38.575522   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.575522   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.575522   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.575522   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.575522   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.576324   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m03","uid":"9d457bc2-c46f-4b5d-8023-5c06ef6198c7","resourceVersion":"1913","creationTimestamp":"2024-06-12T21:57:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_57_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:57:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0612 15:03:38.576915   13752 pod_ready.go:97] node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
	I0612 15:03:38.576915   13752 pod_ready.go:81] duration metric: took 360.8183ms for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	E0612 15:03:38.576915   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
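The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go itself: every request passes through a token-bucket rate limiter before it is sent, and request.go reports the wait once it gets long enough to notice. A minimal sketch of that limiter, assuming client-go's default QPS=5 / Burst=10 settings (the flowcontrol usage and values here are illustrative, not minikube's actual configuration):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket matching client-go's defaults: up to 10 requests pass
	// immediately, then tokens refill at 5 per second. Calls beyond the
	// burst block inside Accept(), which is the wait the log reports.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	start := time.Now()
	for i := 0; i < 15; i++ {
		limiter.Accept()
	}
	fmt.Printf("15 notional requests took %v\n", time.Since(start))
}

With those defaults, the last five iterations wait roughly 200ms each for a fresh token, which is consistent with the ~150-200ms gaps logged between consecutive GETs above.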
	I0612 15:03:38.576915   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.777788   13752 request.go:629] Waited for 200.6726ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:03:38.777994   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:03:38.777994   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.777994   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.777994   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.778288   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.781905   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.781905   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.781905   13752 round_trippers.go:580]     Audit-Id: f4550265-74f1-439c-862d-82804d0fd473
	I0612 15:03:38.781905   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.782000   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.782000   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.782000   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.782145   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdcdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623833c-ce55-46b1-a840-99b3143adac1","resourceVersion":"1958","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0612 15:03:38.980317   13752 request.go:629] Waited for 196.4946ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:03:38.980386   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:03:38.980386   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.980386   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.980386   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.984899   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.984966   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.984966   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.984966   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.984966   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.984966   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.984966   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.984966   13752 round_trippers.go:580]     Audit-Id: 2b126299-3eea-4452-adcd-9bf93ba6f4a3
	I0612 15:03:38.984966   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"1963","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0612 15:03:38.985740   13752 pod_ready.go:97] node "multinode-025000-m02" hosting pod "kube-proxy-tdcdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m02" has status "Ready":"Unknown"
	I0612 15:03:38.985740   13752 pod_ready.go:81] duration metric: took 408.8234ms for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	E0612 15:03:38.985740   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000-m02" hosting pod "kube-proxy-tdcdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m02" has status "Ready":"Unknown"
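Both kube-proxy pods are skipped rather than failed because the readiness helper first checks the hosting node: a node whose Ready condition is "Unknown" (the node controller has stopped hearing from its kubelet) cannot have its pods counted as Ready. An illustrative version of that condition check, not minikube's actual pod_ready.go code (the helper name is invented; the Unknown status mirrors multinode-025000-m02 and -m03 above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether a node's Ready condition is True. A status
// of "Unknown" typically means the node controller has lost contact
// with the kubelet, so pod readiness on that node is indeterminate.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := &corev1.Node{}
	n.Status.Conditions = []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionUnknown},
	}
	fmt.Println(nodeReady(n)) // false: pods on this node are skipped, not failed
}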
	I0612 15:03:38.985740   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:39.174622   13752 request.go:629] Waited for 188.6425ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:03:39.174717   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:03:39.174717   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:39.174717   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:39.174717   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:39.175262   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:39.175262   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:39.175262   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:39.178841   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:39.178899   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:39 GMT
	I0612 15:03:39.178899   13752 round_trippers.go:580]     Audit-Id: f93b5779-6b18-4a07-ab08-c9bdf4045d6a
	I0612 15:03:39.178899   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:39.178899   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:39.178899   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025000","namespace":"kube-system","uid":"83b272cb-1286-47d8-bcb1-a66056dff2a5","resourceVersion":"1865","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.mirror":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.seen":"2024-06-12T21:39:31.214466565Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0612 15:03:39.378032   13752 request.go:629] Waited for 198.2421ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:39.378433   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:39.378433   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:39.378433   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:39.378433   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:39.378810   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:39.378810   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:39.378810   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:39.382976   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:39.382976   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:39.382976   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:39.382976   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:39 GMT
	I0612 15:03:39.382976   13752 round_trippers.go:580]     Audit-Id: b2978915-00e9-4054-8ce5-53073014865e
	I0612 15:03:39.383106   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:39.383874   13752 pod_ready.go:92] pod "kube-scheduler-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:39.383874   13752 pod_ready.go:81] duration metric: took 398.1329ms for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:39.383943   13752 pod_ready.go:38] duration metric: took 27.2395096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
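The 27.2s phase that just finished is a sequence of waits of the following shape: fetch the pod, inspect its Ready condition, and poll until it is True or the 6m0s budget runs out. This is a hand-written client-go sketch, not minikube's pod_ready.go; the namespace and pod name are taken from the log above, and the 2s poll interval is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// test the pod_ready.go lines above log as status "Ready":"True".
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 6m, matching the "waiting up to 6m0s" budget.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-025000", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return podReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}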
	I0612 15:03:39.383943   13752 api_server.go:52] waiting for apiserver process to appear ...
	I0612 15:03:39.393117   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0612 15:03:39.423915   13752 command_runner.go:130] > bbe2d2e51b5f
	I0612 15:03:39.428293   13752 logs.go:276] 1 containers: [bbe2d2e51b5f]
	I0612 15:03:39.437876   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0612 15:03:39.462543   13752 command_runner.go:130] > 6b61f5f6483d
	I0612 15:03:39.463172   13752 logs.go:276] 1 containers: [6b61f5f6483d]
	I0612 15:03:39.473204   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0612 15:03:39.497701   13752 command_runner.go:130] > 26e5daf354e3
	I0612 15:03:39.498780   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:03:39.498838   13752 logs.go:276] 2 containers: [26e5daf354e3 e83cf4eef49e]
	I0612 15:03:39.509299   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0612 15:03:39.533720   13752 command_runner.go:130] > 755750ecd1e3
	I0612 15:03:39.533720   13752 command_runner.go:130] > 6b021c195669
	I0612 15:03:39.535953   13752 logs.go:276] 2 containers: [755750ecd1e3 6b021c195669]
	I0612 15:03:39.546650   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0612 15:03:39.573376   13752 command_runner.go:130] > 227a905829b0
	I0612 15:03:39.573376   13752 command_runner.go:130] > c4842faba751
	I0612 15:03:39.573376   13752 logs.go:276] 2 containers: [227a905829b0 c4842faba751]
	I0612 15:03:39.581549   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0612 15:03:39.607553   13752 command_runner.go:130] > 7acc8ff0a931
	I0612 15:03:39.607553   13752 command_runner.go:130] > 685d167da53c
	I0612 15:03:39.607671   13752 logs.go:276] 2 containers: [7acc8ff0a931 685d167da53c]
	I0612 15:03:39.617593   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0612 15:03:39.646054   13752 command_runner.go:130] > cccfd1e9fef5
	I0612 15:03:39.646109   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:03:39.647036   13752 logs.go:276] 2 containers: [cccfd1e9fef5 4d60d82f6bc5]
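Container discovery above is one docker CLI invocation per component, executed over SSH inside the node: list all containers, running or exited, whose name matches the k8s_<component> prefix, and keep only the IDs. A local equivalent of the same command (the helper name is invented; minikube pipes this through ssh_runner rather than running it on the host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the discovery step in the log: "docker ps -a
// --filter=name=k8s_<component> --format={{.ID}}" prints one container
// ID per line, which is split into a slice here.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}

Two IDs per component (as for coredns, kube-scheduler, and others above) are expected after a node restart: the exited pre-restart container plus the currently running one.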
	I0612 15:03:39.647083   13752 logs.go:123] Gathering logs for kube-controller-manager [7acc8ff0a931] ...
	I0612 15:03:39.647138   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7acc8ff0a931"
	I0612 15:03:39.670996   13752 command_runner.go:130] ! I0612 22:02:28.579013       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:39.670996   13752 command_runner.go:130] ! I0612 22:02:28.927149       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:39.670996   13752 command_runner.go:130] ! I0612 22:02:28.927184       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:39.674110   13752 command_runner.go:130] ! I0612 22:02:28.930688       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:39.674110   13752 command_runner.go:130] ! I0612 22:02:28.932993       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:39.674110   13752 command_runner.go:130] ! I0612 22:02:28.933167       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:39.674273   13752 command_runner.go:130] ! I0612 22:02:28.933539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:39.674587   13752 command_runner.go:130] ! I0612 22:02:32.987820       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:32.988653       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:32.994458       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:32.995780       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:32.996873       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.005703       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.005720       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.006099       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.006120       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.011328       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.013199       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.013216       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:39.675412   13752 command_runner.go:130] ! W0612 22:02:33.045760       1 shared_informer.go:597] resyncPeriod 19h21m1.650821539s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.046400       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:39.675942   13752 command_runner.go:130] ! I0612 22:02:33.046742       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047003       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047066       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047091       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:39.676089   13752 command_runner.go:130] ! I0612 22:02:33.047875       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:39.676089   13752 command_runner.go:130] ! I0612 22:02:33.048961       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:39.676150   13752 command_runner.go:130] ! I0612 22:02:33.049070       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049108       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049203       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049218       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049235       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049307       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! W0612 22:02:33.049318       1 shared_informer.go:597] resyncPeriod 16h27m54.164006095s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049536       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049616       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049652       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049852       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049880       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.052188       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.075270       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.088124       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.088224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.088312       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.092469       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:39.676764   13752 command_runner.go:130] ! I0612 22:02:33.093016       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:39.676764   13752 command_runner.go:130] ! I0612 22:02:33.093183       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099173       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099302       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099269       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099467       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.102279       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.103692       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.103797       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.109335       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.109737       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.109801       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.109811       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.113018       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.114442       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.114573       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.118932       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.118955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.118979       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.119791       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.121411       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.119985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.122332       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.122409       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:39.677454   13752 command_runner.go:130] ! I0612 22:02:33.122432       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.677454   13752 command_runner.go:130] ! I0612 22:02:33.122572       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.122710       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.122722       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.122748       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132412       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132517       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132620       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132660       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132669       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.139478       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.139854       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.140261       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.169621       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.169819       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:39.678228   13752 command_runner.go:130] ! I0612 22:02:33.169849       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:39.678228   13752 command_runner.go:130] ! I0612 22:02:33.170074       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:39.678287   13752 command_runner.go:130] ! I0612 22:02:33.173816       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:39.678287   13752 command_runner.go:130] ! I0612 22:02:33.174120       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:39.678287   13752 command_runner.go:130] ! I0612 22:02:33.174130       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:39.678287   13752 command_runner.go:130] ! I0612 22:02:33.184678       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:39.678357   13752 command_runner.go:130] ! I0612 22:02:33.186030       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:39.678397   13752 command_runner.go:130] ! I0612 22:02:33.192152       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:39.678412   13752 command_runner.go:130] ! I0612 22:02:33.192257       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:39.678436   13752 command_runner.go:130] ! I0612 22:02:33.192268       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:39.678475   13752 command_runner.go:130] ! I0612 22:02:33.194361       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.194659       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.194671       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.200378       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.200552       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.200579       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.203400       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.203797       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.203967       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.207566       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.207732       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.207743       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.207766       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.214389       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.214572       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.214655       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.220603       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.221181       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.222958       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:39.679662   13752 command_runner.go:130] ! E0612 22:02:33.228603       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.228994       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.253059       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.253281       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.253292       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.264081       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.266480       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.266606       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.266742       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.380173       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.380458       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.380796       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.398346       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.401718       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.401737       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.495874       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.496386       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:39.680256   13752 command_runner.go:130] ! I0612 22:02:33.498064       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:39.680511   13752 command_runner.go:130] ! I0612 22:02:33.698817       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:39.680511   13752 command_runner.go:130] ! I0612 22:02:33.699215       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:39.680511   13752 command_runner.go:130] ! I0612 22:02:33.699646       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:39.680511   13752 command_runner.go:130] ! I0612 22:02:33.744449       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:39.681086   13752 command_runner.go:130] ! I0612 22:02:33.744531       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:39.681143   13752 command_runner.go:130] ! I0612 22:02:33.744546       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:39.681143   13752 command_runner.go:130] ! E0612 22:02:33.807267       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:39.681143   13752 command_runner.go:130] ! I0612 22:02:33.807295       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:39.681143   13752 command_runner.go:130] ! I0612 22:02:33.856639       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:39.684604   13752 command_runner.go:130] ! I0612 22:02:33.857088       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:39.684604   13752 command_runner.go:130] ! I0612 22:02:33.857273       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:39.684604   13752 command_runner.go:130] ! I0612 22:02:33.894016       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:39.685165   13752 command_runner.go:130] ! I0612 22:02:33.896048       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:39.685165   13752 command_runner.go:130] ! I0612 22:02:33.896083       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:39.685165   13752 command_runner.go:130] ! I0612 22:02:33.950707       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:39.685214   13752 command_runner.go:130] ! I0612 22:02:33.950731       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:39.685271   13752 command_runner.go:130] ! I0612 22:02:33.950771       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:33.950821       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:33.950870       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:33.995005       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:33.995247       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.062766       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.063067       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.063362       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.063411       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.068203       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.068603       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.068777       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.071309       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.071638       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.071795       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.080804       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.097810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.100018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.100030       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102193       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102337       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102796       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102986       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.113771       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.115010       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.115463       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.119062       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.121259       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:39.685888   13752 command_runner.go:130] ! I0612 22:02:44.124526       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:39.685888   13752 command_runner.go:130] ! I0612 22:02:44.124650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:39.685888   13752 command_runner.go:130] ! I0612 22:02:44.124971       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:39.685888   13752 command_runner.go:130] ! I0612 22:02:44.126246       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.133682       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.134026       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.141044       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.145563       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.158513       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.162319       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.162613       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.162653       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.163186       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164074       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164451       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164672       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164769       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164780       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.167842       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.174384       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.182521       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.186460       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.194992       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.196327       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:39.686660   13752 command_runner.go:130] ! I0612 22:02:44.196530       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.196665       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.200768       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.200988       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.201846       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.207493       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.228051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.792655ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.231633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.306µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.244808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.644732ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.246402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.002µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.297636       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.304265       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.304486       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.311023       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.350865       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.351039       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.353535       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.369296       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.372273       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.381442       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.821842       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.870923       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.871005       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:11.878868       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:24.254264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.921834ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:24.256639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.601µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.832133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.001µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.905221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.518825ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.905853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.201µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.917312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.821108ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.917472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
	I0612 15:03:39.705861   13752 logs.go:123] Gathering logs for kube-controller-manager [685d167da53c] ...
	I0612 15:03:39.707413   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685d167da53c"
	I0612 15:03:39.734182   13752 command_runner.go:130] ! I0612 21:39:26.275086       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:39.742015   13752 command_runner.go:130] ! I0612 21:39:26.758419       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.759036       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.761311       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.761663       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.762454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.762652       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.260969       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.261096       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:39.742106   13752 command_runner.go:130] ! E0612 21:39:31.316508       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.316587       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.342032       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.342287       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.342304       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.362243       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.399024       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.399081       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.399264       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.443376       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.443603       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.443617       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.480477       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.480993       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.481007       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.523943       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.524182       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.524535       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.524741       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.553194       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.554412       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.556852       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.560273       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.560448       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.561614       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:39.742629   13752 command_runner.go:130] ! I0612 21:39:31.561933       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593438       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593459       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593534       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593588       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593650       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593701       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593721       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593739       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594051       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594262       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594306       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594500       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594602       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594857       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594957       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.595276       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:39.743244   13752 command_runner.go:130] ! I0612 21:39:31.595463       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.605247       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.605722       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.607199       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.668704       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.669329       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.669521       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.820968       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.821104       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.821117       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.973500       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.973543       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.975344       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.975377       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.163715       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.163860       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.320380       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.320516       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.320529       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.468817       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.468893       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.636144       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.636921       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.637331       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.775300       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.776007       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.778803       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.920254       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.920359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.920902       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:33.069533       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:39.743880   13752 command_runner.go:130] ! I0612 21:39:33.069689       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:39.743880   13752 command_runner.go:130] ! I0612 21:39:33.069704       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:39.743880   13752 command_runner.go:130] ! I0612 21:39:33.069713       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:39.743880   13752 command_runner.go:130] ! I0612 21:39:33.115693       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:39.744052   13752 command_runner.go:130] ! I0612 21:39:33.115796       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.115809       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.116021       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.116257       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.116416       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.169481       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.169523       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.169561       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.170619       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.170693       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.170745       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.171426       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.171458       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.171479       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.172032       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.172160       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.172352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.172295       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:43.229790       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:43.230104       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:43.230715       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:43.230868       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:39.744167   13752 command_runner.go:130] ! E0612 21:39:43.246433       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:39.744691   13752 command_runner.go:130] ! I0612 21:39:43.246740       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:39.744818   13752 command_runner.go:130] ! I0612 21:39:43.246878       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.247178       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.259694       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.260105       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.260326       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.287038       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.287747       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.289545       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.296881       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.297485       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.297679       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.315673       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.316362       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.316724       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.331329       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.331610       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.331966       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.358081       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.358485       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.358595       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.358609       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.373221       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.373371       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.373388       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.386049       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.386265       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.387457       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.473855       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.474115       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.474421       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.622457       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.622831       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.622950       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.776632       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:39.745696   13752 command_runner.go:130] ! I0612 21:39:43.777149       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:39.745742   13752 command_runner.go:130] ! I0612 21:39:43.777203       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:43.923199       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:43.923416       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:43.923557       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.219008       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.219041       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.219093       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.219104       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.375322       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.375879       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.375896       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.419335       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.419357       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.419672       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.435364       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.441191       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.456985       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.457052       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.460648       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.463138       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.469825       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.469846       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.469856       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.471608       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.471748       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.472789       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.474041       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.475483       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.475505       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.476080       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.479252       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.481788       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.488300       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.491059       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.499063       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.500304       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.507471       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.525355       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.525889       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.526177       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.526390       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.526550       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.526951       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.527038       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.528601       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.528834       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.531261       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.531462       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.531679       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.531942       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.532097       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.532523       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.537873       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.543447       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.564610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.568950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000" podCIDRs=["10.244.0.0/24"]
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.621264       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.644803       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.677466       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.696400       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.723303       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.735837       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:44.758870       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:45.157877       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:45.226557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:45.226973       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:45.795416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="243.746414ms"
	I0612 15:03:39.746887   13752 command_runner.go:130] ! I0612 21:39:45.868449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.90937ms"
	I0612 15:03:39.746887   13752 command_runner.go:130] ! I0612 21:39:45.868845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.402µs"
	I0612 15:03:39.746887   13752 command_runner.go:130] ! I0612 21:39:45.869382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.903µs"
	I0612 15:03:39.746963   13752 command_runner.go:130] ! I0612 21:39:45.905402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="386.807µs"
	I0612 15:03:39.746963   13752 command_runner.go:130] ! I0612 21:39:46.349409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.452815ms"
	I0612 15:03:39.746963   13752 command_runner.go:130] ! I0612 21:39:46.386321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.301621ms"
	I0612 15:03:39.746963   13752 command_runner.go:130] ! I0612 21:39:46.386974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="616.309µs"
	I0612 15:03:39.747039   13752 command_runner.go:130] ! I0612 21:39:56.441072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="366.601µs"
	I0612 15:03:39.747039   13752 command_runner.go:130] ! I0612 21:39:56.465727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.4µs"
	I0612 15:03:39.747039   13752 command_runner.go:130] ! I0612 21:39:57.870560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.5µs"
	I0612 15:03:39.747039   13752 command_runner.go:130] ! I0612 21:39:58.874445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448319ms"
	I0612 15:03:39.747124   13752 command_runner.go:130] ! I0612 21:39:58.875168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="103.901µs"
	I0612 15:03:39.747124   13752 command_runner.go:130] ! I0612 21:39:59.529553       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:39.747204   13752 command_runner.go:130] ! I0612 21:42:39.169243       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:39.747204   13752 command_runner.go:130] ! I0612 21:42:39.188142       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 15:03:39.747204   13752 command_runner.go:130] ! I0612 21:42:39.563565       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:39.747278   13752 command_runner.go:130] ! I0612 21:42:58.063730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747278   13752 command_runner.go:130] ! I0612 21:43:24.138579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.052538ms"
	I0612 15:03:39.747278   13752 command_runner.go:130] ! I0612 21:43:24.156190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.434267ms"
	I0612 15:03:39.747278   13752 command_runner.go:130] ! I0612 21:43:24.156677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.099µs"
	I0612 15:03:39.747364   13752 command_runner.go:130] ! I0612 21:43:24.183391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.299µs"
	I0612 15:03:39.747364   13752 command_runner.go:130] ! I0612 21:43:26.908415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.051448ms"
	I0612 15:03:39.747364   13752 command_runner.go:130] ! I0612 21:43:26.908853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34µs"
	I0612 15:03:39.747364   13752 command_runner.go:130] ! I0612 21:43:27.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.474956ms"
	I0612 15:03:39.747440   13752 command_runner.go:130] ! I0612 21:43:27.304566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488944ms"
	I0612 15:03:39.747440   13752 command_runner.go:130] ! I0612 21:47:16.485552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747440   13752 command_runner.go:130] ! I0612 21:47:16.486568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:39.747521   13752 command_runner.go:130] ! I0612 21:47:16.503987       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.2.0/24"]
	I0612 15:03:39.747521   13752 command_runner.go:130] ! I0612 21:47:19.629018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:39.747521   13752 command_runner.go:130] ! I0612 21:47:35.032365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747635   13752 command_runner.go:130] ! I0612 21:55:19.767980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747676   13752 command_runner.go:130] ! I0612 21:57:52.374240       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747676   13752 command_runner.go:130] ! I0612 21:57:58.774442       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:39.747768   13752 command_runner.go:130] ! I0612 21:57:58.774588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747768   13752 command_runner.go:130] ! I0612 21:57:58.809041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.3.0/24"]
	I0612 15:03:39.747807   13752 command_runner.go:130] ! I0612 21:58:06.126407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747807   13752 command_runner.go:130] ! I0612 21:59:45.222238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
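	The kube-controller-manager entries above repeatedly log "Can't get CPU or zone information for node" for multinode-025000-m02. That message comes from the EndpointSlice controller's topology cache and appears whenever a node has not reported CPU capacity or carries no topology.kubernetes.io/zone label, which is the norm for minikube multinode clusters. A minimal check of what the node actually reports, assuming the kubectl context created by this profile (illustrative, not part of the recorded test run):

	    kubectl --context multinode-025000 get node multinode-025000-m02 -o jsonpath='{.status.capacity.cpu}{"\n"}{.metadata.labels.topology\.kubernetes\.io/zone}{"\n"}'

	An empty second line simply means the node has no zone label, so topology hints are skipped; the message is noise rather than a failure.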
	I0612 15:03:39.759960   13752 logs.go:123] Gathering logs for Docker ...
	I0612 15:03:39.759960   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0612 15:03:39.793299   13752 command_runner.go:130] > Jun 12 22:00:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:39.793299   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:39.793536   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.793536   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:39.793536   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.793612   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:39.793612   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:39.793612   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:39.793612   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:39.793737   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:39.793737   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:39.793737   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:39.793737   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
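	The journal excerpt above is the standard systemd rate-limit pattern: cri-docker.service exited three times in a row because dockerd was not yet listening on unix:///var/run/docker.sock, and after the third scheduled restart systemd gave up ("Start request repeated too quickly"). The restart and rate-limit settings in effect, and the way to clear the failed state once dockerd is reachable, can be inspected with plain systemd commands (generic systemd usage, not a step this test performs):

	    systemctl show cri-docker.service -p Restart -p RestartUSec -p StartLimitBurst -p StartLimitIntervalUSec
	    sudo systemctl reset-failed cri-docker.service
	    sudo systemctl start cri-docker.service

	Here no manual intervention was needed: once dockerd on multinode-025000 finished starting, cri-dockerd was launched again and came up (see the 22:02:20 entries below).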
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.903212301Z" level=info msg="Starting up"
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.904075211Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.905013523Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=653
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.936715611Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:39.794246   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960715605Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:39.794246   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960765806Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:39.794246   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960836707Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:39.794324   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961045509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794358   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961654317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.794394   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961681417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794394   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961916220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.794514   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962126123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794567   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962152723Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:39.794590   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962167223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794590   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962695730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794590   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.963400938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794658   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966083771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.794658   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966199872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794742   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.794742   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966461076Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:39.794742   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967039883Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967257385Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967282486Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974400773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974631276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974732277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:39.794906   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974755077Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:39.794906   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974771478Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:39.794906   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974844078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:39.794984   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975137982Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:39.794984   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975475986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:39.794984   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975634588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:39.794984   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975657088Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975672789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975691989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975721989Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975744389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975762790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795303   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975776490Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795303   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975789190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795303   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975800790Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795303   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975819990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795393   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975835091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795393   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975847091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795393   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975859491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795393   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975870791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795479   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975883291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795479   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975894491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795479   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975906891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795479   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975920192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795562   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975935492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795562   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975947192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795562   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975958792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795649   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975971092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795649   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975989492Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:39.795649   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976009893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795649   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976030193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795745   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976044093Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:39.795745   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976167595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:39.795745   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976210595Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:39.795845   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976227295Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:39.795845   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976239996Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:39.795936   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976250696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795936   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976263096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:39.795936   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976273096Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:39.796015   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976489199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:39.796015   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976766002Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:39.796015   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976819403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:39.796015   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976839003Z" level=info msg="containerd successfully booted in 0.042772s"
	I0612 15:03:39.796110   13752 command_runner.go:130] > Jun 12 22:01:51 multinode-025000 dockerd[647]: time="2024-06-12T22:01:51.958896661Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:39.796110   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.175284022Z" level=info msg="Loading containers: start."
	I0612 15:03:39.796110   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.600253538Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:39.796110   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.679773678Z" level=info msg="Loading containers: done."
	I0612 15:03:39.796187   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.711890198Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:39.796187   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.712661408Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:39.796187   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774658419Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:39.796187   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774960723Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 systemd[1]: Started Docker Application Container Engine.
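	The "--bip" hint logged during startup is informational: dockerd picked the default docker0 subnet 172.17.0.0/16. If that range ever collided with the host or the Hyper-V switch network, it could be pinned in the daemon configuration; a minimal sketch, assuming the standard /etc/docker/daemon.json location (illustrative subnet, not a change made by this test):

	    cat <<'EOF' | sudo tee /etc/docker/daemon.json
	    {
	      "bip": "172.17.0.1/16"
	    }
	    EOF
	    sudo systemctl restart docker

	In this run the default subnet was fine and the daemon completed initialization normally.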
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.292813222Z" level=info msg="Processing signal 'terminated'"
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 systemd[1]: Stopping Docker Application Container Engine...
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.294859626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295213927Z" level=info msg="Daemon shutdown complete"
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295258527Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295281927Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: docker.service: Deactivated successfully.
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Stopped Docker Application Container Engine.
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.376333019Z" level=info msg="Starting up"
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.377520222Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.378639425Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.412854304Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437361860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437471260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:39.796556   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437558660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:39.796556   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437600861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796556   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437638361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.796643   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437674061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796643   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437957561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.796709   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438006462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796709   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438028962Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:39.796787   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438041362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796855   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438072362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796855   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438209862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796902   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441166869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.796902   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441307169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796994   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441467569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.796994   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441599370Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:39.797048   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441629870Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:39.797087   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441648170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:39.797131   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441660470Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:39.797131   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442075271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:39.797131   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442166571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:39.797131   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442187871Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:39.797198   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442201971Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:39.797198   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442217371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:39.797198   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442266071Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:39.797276   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442474372Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:39.797276   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442551072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:39.797332   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442567272Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:39.797332   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442579372Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:39.797392   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442592672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797392   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442605072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797392   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442627672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797445   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442645772Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797445   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442660172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797445   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442671872Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797523   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442683572Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797523   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442694372Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797581   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442714572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797581   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442727972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797581   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442739972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797645   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442754772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797645   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442766572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442778073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442788873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442800473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797768   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442812673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797768   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442826373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797768   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442837973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797768   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442849073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797833   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442860373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797833   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442875173Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:39.797833   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442974073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797912   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442994973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797912   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443006773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:39.797963   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443066573Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:39.798003   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443088973Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:39.798003   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443100473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:39.798040   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443113173Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:39.798040   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443144073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.798184   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443156573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:39.798184   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443166273Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:39.798184   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443418874Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:39.798232   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443494174Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:39.798270   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443534574Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:39.798270   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443571274Z" level=info msg="containerd successfully booted in 0.033238s"
	I0612 15:03:39.798310   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.419757425Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:39.798310   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.449018892Z" level=info msg="Loading containers: start."
	I0612 15:03:39.798348   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.739331061Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:39.798387   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.815989438Z" level=info msg="Loading containers: done."
	I0612 15:03:39.798387   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842536299Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842674899Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885012997Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885608398Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Loaded network plugin cni"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start cri-dockerd grpc backend"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-vgcxw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d\""
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-45qqd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27\""
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449365529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449468129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449499429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449616229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464315863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464397563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464444563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464765264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.578440826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.581064832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582145135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799035   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582532135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799035   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617373216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799109   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617486816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799109   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617504016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799109   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617593816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799224   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da184577f0371664d0a472b38bbfcfd866178308bf69eaabdaefb47d30a7057a/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a228f6c30fdf44f53a40ac14a2a8b995155f743739957ac413c700924fc873ed/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20cbfb3fb853177b89366d165b6a1f67628b2c429266b77034ee6d1ca68b7bac/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094370315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094456516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094499716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094865116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.162934973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163009674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163029074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163177074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.167659984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170028290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170289390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.171053192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233482736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233861237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234167138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234578639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.197280978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198158780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198341381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799839   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213822116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799839   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213977717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799910   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214060117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799910   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214298317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799910   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234135963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800008   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234182263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234192563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234264863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564394224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564548725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564602325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.565056126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630517377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630663477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630850678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.635052387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.972834166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.973545267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974028469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974235669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1044]: time="2024-06-12T22:03:03.121297409Z" level=info msg="ignoring event" container=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.122616734Z" level=info msg="shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123474651Z" level=warning msg="cleaning up after shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123682355Z" level=info msg="cleaning up dead shim" namespace=moby
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819634342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819751243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800625   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819788644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800625   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.820654753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800690   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004015440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800690   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004176540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800690   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004193540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800804   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.005298945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800804   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006561551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800838   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006633551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800882   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006681251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006796752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/986567ef57643aec05ae5353795c364b380cb0f13c2ba98b1c4e04897e7b2e46/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2434f89aefe0079002e81e136580c67ef1dca28bfa3b4c1e950241aea9663d4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542434894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542705495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542742195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.543238997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606926167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606994167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607017268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607410069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
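
Reading note: the repeated four-line groups above ("io.containerd.event.v1.publisher", "io.containerd.internal.v1.shutdown", "io.containerd.ttrpc.v1.task", "io.containerd.ttrpc.v1.pause") are emitted by the io.containerd.runc.v2 shim, one group per container task start, so each group marks one container being (re)created after the node restart. A rough count of container starts can be pulled from the same capture; this is a sketch only, on the assumption that each shim start logs the pause plugin exactly once, and it needs a shell with grep available:

    out/minikube-windows-amd64.exe -p multinode-025000 logs | grep -c "io.containerd.ttrpc.v1.pause"
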
	I0612 15:03:39.822326   13752 logs.go:123] Gathering logs for container status ...
	I0612 15:03:39.822326   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 15:03:39.885944   13752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0612 15:03:39.886241   13752 command_runner.go:130] > f2a949d407287       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   2434f89aefe00       busybox-fc5497c4f-45qqd
	I0612 15:03:39.886368   13752 command_runner.go:130] > 26e5daf354e36       cbb01a7bd410d                                                                                         3 seconds ago        Running             coredns                   1                   986567ef57643       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:39.886368   13752 command_runner.go:130] > 448e057077ddc       6e38f40d628db                                                                                         26 seconds ago       Running             storage-provisioner       2                   5287b61207e62       storage-provisioner
	I0612 15:03:39.886368   13752 command_runner.go:130] > cccfd1e9fef5e       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a20975d81b350       kindnet-bqlg8
	I0612 15:03:39.886368   13752 command_runner.go:130] > 3546a5c003210       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5287b61207e62       storage-provisioner
	I0612 15:03:39.886368   13752 command_runner.go:130] > 227a905829b07       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   435c56b0fbbbb       kube-proxy-47lr8
	I0612 15:03:39.886368   13752 command_runner.go:130] > 6b61f5f6483d5       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   76517193a960a       etcd-multinode-025000
	I0612 15:03:39.886368   13752 command_runner.go:130] > bbe2d2e51b5f3       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   20cbfb3fb8531       kube-apiserver-multinode-025000
	I0612 15:03:39.886368   13752 command_runner.go:130] > 7acc8ff0a9317       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   a228f6c30fdf4       kube-controller-manager-multinode-025000
	I0612 15:03:39.886368   13752 command_runner.go:130] > 755750ecd1e39       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   da184577f0371       kube-scheduler-multinode-025000
	I0612 15:03:39.886368   13752 command_runner.go:130] > bfc0382d49a48       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   84a9b747663ca       busybox-fc5497c4f-45qqd
	I0612 15:03:39.886368   13752 command_runner.go:130] > e83cf4eef49e4       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   894c58e9fe752       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:39.886368   13752 command_runner.go:130] > 4d60d82f6bc5d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   92f2d5f19e95e       kindnet-bqlg8
	I0612 15:03:39.886368   13752 command_runner.go:130] > c4842faba751e       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   fad98f611536b       kube-proxy-47lr8
	I0612 15:03:39.886368   13752 command_runner.go:130] > 6b021c195669e       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d9933fdc9ca72       kube-scheduler-multinode-025000
	I0612 15:03:39.887099   13752 command_runner.go:130] > 685d167da53c9       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bb4351fab502e       kube-controller-manager-multinode-025000
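
This container listing comes from minikube's fallback one-liner shown above, which prefers crictl and falls back to `docker ps -a` when crictl is not on PATH. The same table can be reproduced by hand against this profile (a sketch, assuming the multinode-025000 VM is still running and reachable over SSH):

    out/minikube-windows-amd64.exe -p multinode-025000 ssh -- "sudo crictl ps -a || sudo docker ps -a"

The table itself is consistent with a node restart rather than a crash loop: etcd and kube-apiserver are on ATTEMPT 0 in fresh pod sandboxes, kube-proxy, kindnet, coredns and busybox are on their second attempt, and the pre-restart copies of each sit in Exited state further down.
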
	I0612 15:03:39.889651   13752 logs.go:123] Gathering logs for dmesg ...
	I0612 15:03:39.889651   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 15:03:39.910819   13752 command_runner.go:130] > [Jun12 22:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0612 15:03:39.910819   13752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0612 15:03:39.910819   13752 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0612 15:03:39.910819   13752 command_runner.go:130] > [  +0.131000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0612 15:03:39.910917   13752 command_runner.go:130] > [  +0.025099] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0612 15:03:39.910917   13752 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.064850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.023448] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0612 15:03:39.911062   13752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +5.508165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +1.342262] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +1.269809] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +7.259362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0612 15:03:39.911062   13752 command_runner.go:130] > [Jun12 22:01] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.155290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [Jun12 22:02] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.095843] kauditd_printk_skb: 73 callbacks suppressed
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.507476] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.171390] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.210222] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +2.904531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.189304] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.162041] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.261611] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.815328] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.096217] kauditd_printk_skb: 205 callbacks suppressed
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +3.646175] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +1.441935] kauditd_printk_skb: 54 callbacks suppressed
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +5.624550] kauditd_printk_skb: 20 callbacks suppressed
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +3.644538] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +8.250122] kauditd_printk_skb: 70 callbacks suppressed
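
The dmesg pass filters for warning-and-above kernel messages: -H prints human-readable timestamps, -P disables the pager, -L=never turns off color codes, and --level restricts output to the listed priorities. Nothing in this window looks test-specific; the RETBleed/MDS/TAA lines are standard CPU-mitigation warnings for a virtualized guest, and the systemd-fstab-generator entries are routine boot noise. To rerun the same filter interactively (a sketch):

    out/minikube-windows-amd64.exe -p multinode-025000 ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
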
	I0612 15:03:39.913122   13752 logs.go:123] Gathering logs for coredns [e83cf4eef49e] ...
	I0612 15:03:39.913122   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83cf4eef49e"
	I0612 15:03:39.941056   13752 command_runner.go:130] > .:53
	I0612 15:03:39.942203   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:39.942203   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:39.942203   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 127.0.0.1:53490 - 39118 "HINFO IN 4677201826540465335.2322207397622737457. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048277073s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:49256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267302s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:54623 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.08558s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:51804 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.048771085s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:53027 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.100151983s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:34534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001199s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:44985 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000141701s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:54544 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000543s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:55517 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000123601s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:42995 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099501s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:51839 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.135718274s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:52123 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000304602s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:36740 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274801s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:48333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003287018s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:55754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000962s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:51695 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000224102s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:49605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096301s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:37746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283001s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:54995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106501s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:49201 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077401s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:60577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077201s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:36057 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107301s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:43898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:49177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091201s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:45207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000584s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:36676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151001s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:60305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305802s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:37468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209201s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:34743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125201s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240801s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:42306 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000309601s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:36509 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152901s
	I0612 15:03:39.942829   13752 command_runner.go:130] > [INFO] 10.244.1.2:55614 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000545s
	I0612 15:03:39.943001   13752 command_runner.go:130] > [INFO] 10.244.0.3:39195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130301s
	I0612 15:03:39.943227   13752 command_runner.go:130] > [INFO] 10.244.0.3:34618 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272902s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.0.3:44444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177201s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.0.3:35691 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001307s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.1.2:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.1.2:41925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207401s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.1.2:44306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000736s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.1.2:46158 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000547s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
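
This coredns capture is from the pre-restart container (e83cf4eef49e, now Exited): the SIGTERM and lameduck lines at the end are its clean shutdown when the node went down, and the NOERROR answers above them show in-cluster DNS (kubernetes.default.svc.cluster.local, host.minikube.internal) was resolving from both pod subnets (10.244.0.x and 10.244.1.x) before the restart. To tail the replacement container instead (a sketch; the ID 26e5daf354e36 is taken from the crictl listing earlier in this section):

    out/minikube-windows-amd64.exe -p multinode-025000 ssh -- "docker logs --tail 400 26e5daf354e36"
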
	I0612 15:03:39.946705   13752 logs.go:123] Gathering logs for kube-scheduler [6b021c195669] ...
	I0612 15:03:39.947327   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b021c195669"
	I0612 15:03:39.977128   13752 command_runner.go:130] ! I0612 21:39:26.474423       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:39.977299   13752 command_runner.go:130] ! W0612 21:39:28.263287       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:39.977299   13752 command_runner.go:130] ! W0612 21:39:28.263543       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.977299   13752 command_runner.go:130] ! W0612 21:39:28.263706       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:39.977299   13752 command_runner.go:130] ! W0612 21:39:28.263849       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:39.977299   13752 command_runner.go:130] ! I0612 21:39:28.303051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:39.977299   13752 command_runner.go:130] ! I0612 21:39:28.305840       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:39.977426   13752 command_runner.go:130] ! I0612 21:39:28.310682       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:39.977426   13752 command_runner.go:130] ! I0612 21:39:28.312812       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:39.977426   13752 command_runner.go:130] ! I0612 21:39:28.313421       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:39.977426   13752 command_runner.go:130] ! I0612 21:39:28.313594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:39.977426   13752 command_runner.go:130] ! W0612 21:39:28.336905       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.977426   13752 command_runner.go:130] ! E0612 21:39:28.337826       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.977590   13752 command_runner.go:130] ! W0612 21:39:28.338227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:39.977590   13752 command_runner.go:130] ! E0612 21:39:28.338391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:39.977703   13752 command_runner.go:130] ! W0612 21:39:28.338652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:39.977703   13752 command_runner.go:130] ! E0612 21:39:28.338896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:39.977794   13752 command_runner.go:130] ! W0612 21:39:28.339195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:39.977794   13752 command_runner.go:130] ! E0612 21:39:28.339406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:39.977794   13752 command_runner.go:130] ! W0612 21:39:28.339694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:39.977794   13752 command_runner.go:130] ! E0612 21:39:28.339892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:39.977910   13752 command_runner.go:130] ! W0612 21:39:28.340188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:39.977910   13752 command_runner.go:130] ! E0612 21:39:28.340362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:39.977910   13752 command_runner.go:130] ! W0612 21:39:28.340697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.977910   13752 command_runner.go:130] ! E0612 21:39:28.341129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978061   13752 command_runner.go:130] ! W0612 21:39:28.341447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.341664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.341989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.342229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.342540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.344839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.347872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.348490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.348742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.349066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.349147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! W0612 21:39:29.192073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! E0612 21:39:29.192126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! W0612 21:39:29.249000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! E0612 21:39:29.249248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! W0612 21:39:29.268880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:39.978824   13752 command_runner.go:130] ! E0612 21:39:29.268972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:39.978824   13752 command_runner.go:130] ! W0612 21:39:29.271696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978824   13752 command_runner.go:130] ! E0612 21:39:29.271839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978928   13752 command_runner.go:130] ! W0612 21:39:29.275489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:39.978928   13752 command_runner.go:130] ! E0612 21:39:29.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:39.979010   13752 command_runner.go:130] ! W0612 21:39:29.296739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.979010   13752 command_runner.go:130] ! E0612 21:39:29.297145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.979010   13752 command_runner.go:130] ! W0612 21:39:29.433593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:39.979110   13752 command_runner.go:130] ! E0612 21:39:29.433887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:39.979110   13752 command_runner.go:130] ! W0612 21:39:29.471880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:39.979110   13752 command_runner.go:130] ! E0612 21:39:29.471994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:39.979196   13752 command_runner.go:130] ! W0612 21:39:29.482669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.979196   13752 command_runner.go:130] ! E0612 21:39:29.483008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.979275   13752 command_runner.go:130] ! W0612 21:39:29.569402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:39.979275   13752 command_runner.go:130] ! E0612 21:39:29.571433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:39.979275   13752 command_runner.go:130] ! W0612 21:39:29.677906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:39.979364   13752 command_runner.go:130] ! E0612 21:39:29.677950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:39.979364   13752 command_runner.go:130] ! W0612 21:39:29.687951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:39.979462   13752 command_runner.go:130] ! E0612 21:39:29.688054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:39.979462   13752 command_runner.go:130] ! W0612 21:39:29.780288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:39.979539   13752 command_runner.go:130] ! E0612 21:39:29.780411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:39.979539   13752 command_runner.go:130] ! W0612 21:39:29.832564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:39.979539   13752 command_runner.go:130] ! E0612 21:39:29.832892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:39.979617   13752 command_runner.go:130] ! W0612 21:39:29.889591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.979617   13752 command_runner.go:130] ! E0612 21:39:29.889868       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.979617   13752 command_runner.go:130] ! I0612 21:39:32.513980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:39.979617   13752 command_runner.go:130] ! E0612 22:00:01.172050       1 run.go:74] "command failed" err="finished without leader elect"
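The burst of "forbidden" list/watch failures above is the usual kube-scheduler startup race: the scheduler's informers begin listing before the system:kube-scheduler RBAC bindings are visible to the apiserver, and they retry until the caches sync (the "Caches are synced" line at 21:39:32). The actionable signal is the final line, where the scheduler exits at 22:00:01 with "finished without leader elect", a message typically emitted when the leader-election lock is lost, consistent with the control-plane VM being stopped and restarted during this test. A hedged spot-check of the RBAC side on a live cluster (commands assume a working kubeconfig; they are not part of the original log):

	# Both should print "yes" once the scheduler's RBAC bindings are in place
	kubectl auth can-i list nodes --as=system:kube-scheduler
	kubectl auth can-i watch namespaces --as=system:kube-scheduler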
	I0612 15:03:39.988944   13752 logs.go:123] Gathering logs for kindnet [cccfd1e9fef5] ...
	I0612 15:03:39.988944   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccfd1e9fef5"
	I0612 15:03:40.009560   13752 command_runner.go:130] ! I0612 22:02:33.621070       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:02:33.621857       1 main.go:107] hostIP = 172.23.200.184
	I0612 15:03:40.015182   13752 command_runner.go:130] ! podIP = 172.23.200.184
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:02:33.622055       1 main.go:116] setting mtu 1500 for CNI 
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:02:33.622069       1 main.go:146] kindnetd IP family: "ipv4"
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:02:33.622082       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:03:03.928722       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:03:03.948068       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:03:03.948207       1 main.go:227] handling current node
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:03:04.015006       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.015280       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.015617       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.196.105 Flags: [] Table: 0} 
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.015960       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.015976       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.016053       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:14.032118       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:14.032228       1 main.go:227] handling current node
	I0612 15:03:40.015410   13752 command_runner.go:130] ! I0612 22:03:14.032243       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.015410   13752 command_runner.go:130] ! I0612 22:03:14.032255       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.015410   13752 command_runner.go:130] ! I0612 22:03:14.032739       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.015470   13752 command_runner.go:130] ! I0612 22:03:14.032836       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.015503   13752 command_runner.go:130] ! I0612 22:03:24.045393       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:40.015503   13752 command_runner.go:130] ! I0612 22:03:24.045492       1 main.go:227] handling current node
	I0612 15:03:40.015503   13752 command_runner.go:130] ! I0612 22:03:24.045504       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:24.045510       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:24.045926       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:24.045941       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:34.052186       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:34.052288       1 main.go:227] handling current node
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:34.052302       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.015671   13752 command_runner.go:130] ! I0612 22:03:34.052309       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.015671   13752 command_runner.go:130] ! I0612 22:03:34.052423       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.015671   13752 command_runner.go:130] ! I0612 22:03:34.052452       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
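The kindnet log above shows the expected recovery pattern: one 30s i/o timeout reaching the in-cluster apiserver VIP (10.96.0.1:443) right after restart, then a successful node list, after which kindnetd re-syncs every ~10s and programs one route per remote node, sending that node's pod CIDR via its InternalIP. An approximate shell equivalent of the two "Adding route" lines (illustrative only; kindnet does this via netlink, not the ip tool):

	# Route each remote node's pod CIDR via that node's InternalIP
	ip route replace 10.244.1.0/24 via 172.23.196.105   # pods on multinode-025000-m02
	ip route replace 10.244.3.0/24 via 172.23.206.72    # pods on multinode-025000-m03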
	I0612 15:03:40.017991   13752 logs.go:123] Gathering logs for describe nodes ...
	I0612 15:03:40.018063   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 15:03:40.257940   13752 command_runner.go:130] > Name:               multinode-025000
	I0612 15:03:40.257940   13752 command_runner.go:130] > Roles:              control-plane
	I0612 15:03:40.257940   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0612 15:03:40.257940   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:40.257940   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:39:28 +0000
	I0612 15:03:40.257940   13752 command_runner.go:130] > Taints:             <none>
	I0612 15:03:40.257940   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:40.257940   13752 command_runner.go:130] > Lease:
	I0612 15:03:40.257940   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000
	I0612 15:03:40.257940   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:40.257940   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 22:03:32 +0000
	I0612 15:03:40.257940   13752 command_runner.go:130] > Conditions:
	I0612 15:03:40.257940   13752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0612 15:03:40.257940   13752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0612 15:03:40.257940   13752 command_runner.go:130] >   MemoryPressure   False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0612 15:03:40.257940   13752 command_runner.go:130] >   DiskPressure     False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0612 15:03:40.257940   13752 command_runner.go:130] >   PIDPressure      False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Ready            True    Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 22:03:11 +0000   KubeletReady                 kubelet is posting ready status
	I0612 15:03:40.258608   13752 command_runner.go:130] > Addresses:
	I0612 15:03:40.258608   13752 command_runner.go:130] >   InternalIP:  172.23.200.184
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Hostname:    multinode-025000
	I0612 15:03:40.258608   13752 command_runner.go:130] > Capacity:
	I0612 15:03:40.258608   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.258608   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.258608   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.258608   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.258608   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.258608   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:40.258608   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.258608   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.258608   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.258608   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.258608   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.258608   13752 command_runner.go:130] > System Info:
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Machine ID:                 e65e28dfa5bf4f27a0123e4ae1007793
	I0612 15:03:40.258608   13752 command_runner.go:130] >   System UUID:                3e5a42d3-ea80-0c4d-ad18-4b76e4f3e22f
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Boot ID:                    0efecf43-b070-4a8f-b542-4d1fd07306ad
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:40.258608   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:40.258608   13752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0612 15:03:40.258608   13752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0612 15:03:40.258608   13752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:40.258608   13752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0612 15:03:40.258608   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-45qqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-vgcxw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 etcd-multinode-025000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kindnet-bqlg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-025000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-025000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kube-proxy-47lr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-025000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:40.259401   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:40.259401   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:40.259401   13752 command_runner.go:130] >   Resource           Requests     Limits
	I0612 15:03:40.259401   13752 command_runner.go:130] >   --------           --------     ------
	I0612 15:03:40.259401   13752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0612 15:03:40.259401   13752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0612 15:03:40.259401   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0612 15:03:40.259493   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0612 15:03:40.259493   13752 command_runner.go:130] > Events:
	I0612 15:03:40.259493   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:40.259493   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:40.259493   13752 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0612 15:03:40.259493   13752 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0612 15:03:40.259579   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:40.259579   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.259579   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:40.259579   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-025000 status is now: NodeReady
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.259850   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:40.259850   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.259850   13752 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:40.259850   13752 command_runner.go:130] > Name:               multinode-025000-m02
	I0612 15:03:40.259850   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:40.259850   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:40.259850   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m02
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:40.260030   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_42_39_0700
	I0612 15:03:40.260030   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:40.260030   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:40.260030   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:40.260030   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:40.260113   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:42:39 +0000
	I0612 15:03:40.260113   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:40.260113   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:40.260113   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:40.260113   13752 command_runner.go:130] > Lease:
	I0612 15:03:40.260113   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m02
	I0612 15:03:40.260197   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:40.260197   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:20 +0000
	I0612 15:03:40.260197   13752 command_runner.go:130] > Conditions:
	I0612 15:03:40.260197   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:40.260197   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:40.260197   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.260197   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.260197   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.260349   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.260349   13752 command_runner.go:130] > Addresses:
	I0612 15:03:40.260440   13752 command_runner.go:130] >   InternalIP:  172.23.196.105
	I0612 15:03:40.260440   13752 command_runner.go:130] >   Hostname:    multinode-025000-m02
	I0612 15:03:40.260440   13752 command_runner.go:130] > Capacity:
	I0612 15:03:40.260440   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.260440   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.260440   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.260440   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.260525   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.260525   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:40.260525   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.260525   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.260525   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.260525   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.260525   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.260525   13752 command_runner.go:130] > System Info:
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Machine ID:                 c11d7ff5518449f8bc8169a1fd7b0c4b
	I0612 15:03:40.260615   13752 command_runner.go:130] >   System UUID:                3b021c48-8479-f34c-83c2-77b944a77c5e
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Boot ID:                    67e77c09-c6b2-4c01-b167-2481dd4a7a96
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:40.260615   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:40.260701   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:40.260701   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:40.260701   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:40.260701   13752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0612 15:03:40.260701   13752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0612 15:03:40.260701   13752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0612 15:03:40.260701   13752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:40.260793   13752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0612 15:03:40.260793   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-9bsls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:40.260793   13752 command_runner.go:130] >   kube-system                 kindnet-v4cqk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0612 15:03:40.260793   13752 command_runner.go:130] >   kube-system                 kube-proxy-tdcdp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0612 15:03:40.260793   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:40.260878   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:40.260878   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:40.260878   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:40.260878   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:40.260878   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:40.260878   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:40.260968   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:40.260968   13752 command_runner.go:130] > Events:
	I0612 15:03:40.260968   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:40.260968   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:40.260968   13752 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0612 15:03:40.260968   13752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:40.260968   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	I0612 15:03:40.261057   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.261057   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	I0612 15:03:40.261057   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.261057   13752 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-025000-m02 status is now: NodeReady
	I0612 15:03:40.261163   13752 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:40.261163   13752 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-025000-m02 status is now: NodeNotReady
	I0612 15:03:40.261163   13752 command_runner.go:130] > Name:               multinode-025000-m03
	I0612 15:03:40.261163   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:40.261163   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:40.261303   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:40.261303   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:40.261356   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m03
	I0612 15:03:40.261356   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:40.261386   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:40.261386   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:40.261386   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:40.261422   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_57_59_0700
	I0612 15:03:40.261422   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:40.261453   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:40.261485   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:40.261485   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:40.261485   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:57:58 +0000
	I0612 15:03:40.261520   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:40.261520   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:40.261551   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:40.261551   13752 command_runner.go:130] > Lease:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m03
	I0612 15:03:40.261551   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:40.261551   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:00 +0000
	I0612 15:03:40.261551   13752 command_runner.go:130] > Conditions:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:40.261551   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:40.261551   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.261551   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.261551   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.261551   13752 command_runner.go:130] > Addresses:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   InternalIP:  172.23.206.72
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Hostname:    multinode-025000-m03
	I0612 15:03:40.261551   13752 command_runner.go:130] > Capacity:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.261551   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.261551   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.261551   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.261551   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.261551   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.261551   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.261551   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.261551   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.261551   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.261551   13752 command_runner.go:130] > System Info:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Machine ID:                 b62d5e6740dc42d880d6595ac7dd57ae
	I0612 15:03:40.261551   13752 command_runner.go:130] >   System UUID:                31a13a9b-b7c6-6643-8352-fb322079216a
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Boot ID:                    a21b9eff-2471-4589-9e35-5845aae64358
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:40.261551   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:40.261551   13752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0612 15:03:40.261551   13752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0612 15:03:40.261551   13752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:40.261551   13752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0612 15:03:40.261551   13752 command_runner.go:130] >   kube-system                 kindnet-8252q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0612 15:03:40.261551   13752 command_runner.go:130] >   kube-system                 kube-proxy-7jwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0612 15:03:40.261551   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:40.261551   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:40.262155   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:40.262155   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:40.262155   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:40.262155   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:40.262155   13752 command_runner.go:130] > Events:
	I0612 15:03:40.262155   13752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0612 15:03:40.262155   13752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0612 15:03:40.262155   13752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0612 15:03:40.262155   13752 command_runner.go:130] >   Normal  Starting                 5m38s                  kube-proxy       
	I0612 15:03:40.262155   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  NodeReady                5m34s                  kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:40.262526   13752 command_runner.go:130] >   Normal  NodeNotReady             3m55s                  node-controller  Node multinode-025000-m03 status is now: NodeNotReady
	I0612 15:03:40.262526   13752 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
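The describe output above captures the cluster mid-restart: the control-plane node multinode-025000 is Ready (kubelet and kube-proxy restarted ~70s ago), while multinode-025000-m02 and -m03 still carry the node.kubernetes.io/unreachable NoSchedule/NoExecute taints with all conditions Unknown ("Kubelet stopped posting node status"), i.e. their kubelets have not rejoined yet. A hedged way to reproduce this view interactively (assumes the kubeconfig context matches the minikube profile name):

	# Node summary plus the taints on a not-yet-recovered worker
	kubectl --context multinode-025000 get nodes -o wide
	kubectl --context multinode-025000 describe node multinode-025000-m02 | grep -A2 Taints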
	I0612 15:03:40.269958   13752 logs.go:123] Gathering logs for kube-apiserver [bbe2d2e51b5f] ...
	I0612 15:03:40.269958   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe2d2e51b5f"
	I0612 15:03:40.294743   13752 command_runner.go:130] ! I0612 22:02:28.032945       1 options.go:221] external host was not specified, using 172.23.200.184
	I0612 15:03:40.294743   13752 command_runner.go:130] ! I0612 22:02:28.036290       1 server.go:148] Version: v1.30.1
	I0612 15:03:40.294743   13752 command_runner.go:130] ! I0612 22:02:28.036339       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.306847   13752 command_runner.go:130] ! I0612 22:02:28.916544       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 15:03:40.306847   13752 command_runner.go:130] ! I0612 22:02:28.917947       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:40.306847   13752 command_runner.go:130] ! I0612 22:02:28.921952       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 15:03:40.306928   13752 command_runner.go:130] ! I0612 22:02:28.922146       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 15:03:40.306928   13752 command_runner.go:130] ! I0612 22:02:28.922426       1 instance.go:299] Using reconciler: lease
	I0612 15:03:40.306928   13752 command_runner.go:130] ! I0612 22:02:29.570201       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0612 15:03:40.307121   13752 command_runner.go:130] ! W0612 22:02:29.570355       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307121   13752 command_runner.go:130] ! I0612 22:02:29.801222       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0612 15:03:40.307185   13752 command_runner.go:130] ! I0612 22:02:29.801702       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0612 15:03:40.307185   13752 command_runner.go:130] ! I0612 22:02:30.046166       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0612 15:03:40.307185   13752 command_runner.go:130] ! I0612 22:02:30.216981       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0612 15:03:40.307243   13752 command_runner.go:130] ! I0612 22:02:30.231997       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0612 15:03:40.307286   13752 command_runner.go:130] ! W0612 22:02:30.232097       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307286   13752 command_runner.go:130] ! W0612 22:02:30.232107       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307355   13752 command_runner.go:130] ! I0612 22:02:30.232792       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0612 15:03:40.307355   13752 command_runner.go:130] ! W0612 22:02:30.232881       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307396   13752 command_runner.go:130] ! I0612 22:02:30.233864       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0612 15:03:40.307396   13752 command_runner.go:130] ! I0612 22:02:30.235099       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0612 15:03:40.307396   13752 command_runner.go:130] ! W0612 22:02:30.235211       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0612 15:03:40.307453   13752 command_runner.go:130] ! W0612 22:02:30.235220       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0612 15:03:40.307492   13752 command_runner.go:130] ! I0612 22:02:30.237278       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0612 15:03:40.307492   13752 command_runner.go:130] ! W0612 22:02:30.237314       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0612 15:03:40.307526   13752 command_runner.go:130] ! I0612 22:02:30.238451       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0612 15:03:40.307526   13752 command_runner.go:130] ! W0612 22:02:30.238555       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307564   13752 command_runner.go:130] ! W0612 22:02:30.238564       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307564   13752 command_runner.go:130] ! I0612 22:02:30.239199       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.239289       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.239352       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.239881       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.242982       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.243157       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.243324       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.245920       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.246121       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.246235       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.249402       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.249562       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.255420       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.255587       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.255759       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.257021       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.257206       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.257308       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.269872       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.270105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.270312       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.272005       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.273608       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.273714       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.273724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.277668       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.277779       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.277789       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.280767       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.280916       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.280928       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.281776       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.281806       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.296752       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0612 15:03:40.308202   13752 command_runner.go:130] ! W0612 22:02:30.296810       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.308202   13752 command_runner.go:130] ! I0612 22:02:30.901606       1 secure_serving.go:213] Serving securely on [::]:8443
	I0612 15:03:40.308255   13752 command_runner.go:130] ! I0612 22:02:30.901766       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:40.308255   13752 command_runner.go:130] ! I0612 22:02:30.903281       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0612 15:03:40.308336   13752 command_runner.go:130] ! I0612 22:02:30.903373       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0612 15:03:40.308336   13752 command_runner.go:130] ! I0612 22:02:30.903401       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0612 15:03:40.308336   13752 command_runner.go:130] ! I0612 22:02:30.903987       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0612 15:03:40.308382   13752 command_runner.go:130] ! I0612 22:02:30.904124       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0612 15:03:40.308382   13752 command_runner.go:130] ! I0612 22:02:30.904843       1 aggregator.go:163] waiting for initial CRD sync...
	I0612 15:03:40.308452   13752 command_runner.go:130] ! I0612 22:02:30.905095       1 controller.go:78] Starting OpenAPI AggregationController
	I0612 15:03:40.308452   13752 command_runner.go:130] ! I0612 22:02:30.906424       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0612 15:03:40.308508   13752 command_runner.go:130] ! I0612 22:02:30.901780       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:40.308508   13752 command_runner.go:130] ! I0612 22:02:30.907108       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:40.308508   13752 command_runner.go:130] ! I0612 22:02:30.907337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:40.308576   13752 command_runner.go:130] ! I0612 22:02:30.901790       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0612 15:03:40.308616   13752 command_runner.go:130] ! I0612 22:02:30.901800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:40.308616   13752 command_runner.go:130] ! I0612 22:02:30.909555       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0612 15:03:40.308616   13752 command_runner.go:130] ! I0612 22:02:30.909699       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0612 15:03:40.308678   13752 command_runner.go:130] ! I0612 22:02:30.910003       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0612 15:03:40.308678   13752 command_runner.go:130] ! I0612 22:02:30.911734       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0612 15:03:40.308678   13752 command_runner.go:130] ! I0612 22:02:30.911846       1 controller.go:116] Starting legacy_token_tracking_controller
	I0612 15:03:40.308678   13752 command_runner.go:130] ! I0612 22:02:30.911861       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0612 15:03:40.308740   13752 command_runner.go:130] ! I0612 22:02:30.912590       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0612 15:03:40.308740   13752 command_runner.go:130] ! I0612 22:02:30.912666       1 available_controller.go:423] Starting AvailableConditionController
	I0612 15:03:40.308740   13752 command_runner.go:130] ! I0612 22:02:30.912673       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.913776       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.953613       1 controller.go:139] Starting OpenAPI controller
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.953929       1 controller.go:87] Starting OpenAPI V3 controller
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.954278       1 naming_controller.go:291] Starting NamingConditionController
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.954516       1 establishing_controller.go:76] Starting EstablishingController
	I0612 15:03:40.308902   13752 command_runner.go:130] ! I0612 22:02:30.954966       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0612 15:03:40.308902   13752 command_runner.go:130] ! I0612 22:02:30.955230       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0612 15:03:40.308902   13752 command_runner.go:130] ! I0612 22:02:30.955507       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0612 15:03:40.308902   13752 command_runner.go:130] ! I0612 22:02:31.003418       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 15:03:40.308973   13752 command_runner.go:130] ! I0612 22:02:31.009966       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 15:03:40.308973   13752 command_runner.go:130] ! I0612 22:02:31.010019       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 15:03:40.308973   13752 command_runner.go:130] ! I0612 22:02:31.010029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 15:03:40.308973   13752 command_runner.go:130] ! I0612 22:02:31.010400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 15:03:40.309038   13752 command_runner.go:130] ! I0612 22:02:31.011993       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 15:03:40.309038   13752 command_runner.go:130] ! I0612 22:02:31.012756       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 15:03:40.309085   13752 command_runner.go:130] ! I0612 22:02:31.017182       1 aggregator.go:165] initial CRD sync complete...
	I0612 15:03:40.309436   13752 command_runner.go:130] ! I0612 22:02:31.017223       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 15:03:40.309436   13752 command_runner.go:130] ! I0612 22:02:31.017231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 15:03:40.309477   13752 command_runner.go:130] ! I0612 22:02:31.017238       1 cache.go:39] Caches are synced for autoregister controller
	I0612 15:03:40.309477   13752 command_runner.go:130] ! I0612 22:02:31.018109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 15:03:40.309519   13752 command_runner.go:130] ! I0612 22:02:31.018524       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:40.309563   13752 command_runner.go:130] ! I0612 22:02:31.019519       1 policy_source.go:224] refreshing policies
	I0612 15:03:40.309563   13752 command_runner.go:130] ! I0612 22:02:31.020420       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 15:03:40.309605   13752 command_runner.go:130] ! I0612 22:02:31.091331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 15:03:40.309605   13752 command_runner.go:130] ! I0612 22:02:31.909532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 15:03:40.309648   13752 command_runner.go:130] ! W0612 22:02:32.355789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.198.154 172.23.200.184]
	I0612 15:03:40.309648   13752 command_runner.go:130] ! I0612 22:02:32.358485       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 15:03:40.309694   13752 command_runner.go:130] ! I0612 22:02:32.377254       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 15:03:40.309694   13752 command_runner.go:130] ! I0612 22:02:33.727670       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 15:03:40.309760   13752 command_runner.go:130] ! I0612 22:02:34.008881       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 15:03:40.309760   13752 command_runner.go:130] ! I0612 22:02:34.034607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 15:03:40.309800   13752 command_runner.go:130] ! I0612 22:02:34.157870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 15:03:40.309800   13752 command_runner.go:130] ! I0612 22:02:34.176471       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 15:03:40.309853   13752 command_runner.go:130] ! W0612 22:02:52.350035       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.200.184]
	I0612 15:03:40.317267   13752 logs.go:123] Gathering logs for kube-proxy [227a905829b0] ...
	I0612 15:03:40.317267   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227a905829b0"
	I0612 15:03:40.337007   13752 command_runner.go:130] ! I0612 22:02:33.538961       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:40.346991   13752 command_runner.go:130] ! I0612 22:02:33.585761       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.200.184"]
	I0612 15:03:40.346991   13752 command_runner.go:130] ! I0612 22:02:33.754056       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:40.346991   13752 command_runner.go:130] ! I0612 22:02:33.754118       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:40.346991   13752 command_runner.go:130] ! I0612 22:02:33.754141       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:40.347126   13752 command_runner.go:130] ! I0612 22:02:33.765449       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:40.347162   13752 command_runner.go:130] ! I0612 22:02:33.766192       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:40.347162   13752 command_runner.go:130] ! I0612 22:02:33.766246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.347162   13752 command_runner.go:130] ! I0612 22:02:33.769980       1 config.go:192] "Starting service config controller"
	I0612 15:03:40.347282   13752 command_runner.go:130] ! I0612 22:02:33.770461       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:40.347333   13752 command_runner.go:130] ! I0612 22:02:33.770493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:40.347333   13752 command_runner.go:130] ! I0612 22:02:33.770630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:40.347388   13752 command_runner.go:130] ! I0612 22:02:33.773852       1 config.go:319] "Starting node config controller"
	I0612 15:03:40.347433   13752 command_runner.go:130] ! I0612 22:02:33.773944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:40.347471   13752 command_runner.go:130] ! I0612 22:02:33.870743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:40.347510   13752 command_runner.go:130] ! I0612 22:02:33.870698       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:40.347510   13752 command_runner.go:130] ! I0612 22:02:33.882534       1 shared_informer.go:320] Caches are synced for node config
	I0612 15:03:40.350077   13752 logs.go:123] Gathering logs for kubelet ...
	I0612 15:03:40.350164   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 15:03:40.379682   13752 command_runner.go:130] > Jun 12 22:02:21 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.063456    1381 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064093    1381 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064387    1381 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: E0612 22:02:22.065868    1381 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789327    1437 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789465    1437 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.790480    1437 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: E0612 22:02:22.790564    1437 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:40.380874   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:40.380874   13752 command_runner.go:130] > Jun 12 22:02:23 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380874   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380874   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414046    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414147    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414632    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.416608    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.437750    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458497    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0612 15:03:40.381143   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458849    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0612 15:03:40.381143   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460038    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0612 15:03:40.381300   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460095    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-025000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0612 15:03:40.381344   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464057    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0612 15:03:40.381380   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464080    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0612 15:03:40.381380   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464924    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:40.381380   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466519    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0612 15:03:40.381380   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466546    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0612 15:03:40.381551   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466613    1517 kubelet.go:312] "Adding apiserver pod source"
	I0612 15:03:40.381551   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.467352    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0612 15:03:40.381551   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.471384    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.381643   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.471502    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.381643   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.471869    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0612 15:03:40.381643   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.477415    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0612 15:03:40.381729   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.478424    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0612 15:03:40.381729   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.480523    1517 server.go:1264] "Started kubelet"
	I0612 15:03:40.381729   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.481568    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.381814   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.481666    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.381814   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.481865    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0612 15:03:40.381814   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.482789    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0612 15:03:40.381899   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.485497    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0612 15:03:40.381899   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.490040    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.493219    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.495119    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.496095    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.498560    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501388    1517 factory.go:221] Registration of the systemd container factory successfully
	I0612 15:03:40.382099   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501556    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0612 15:03:40.382099   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501657    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0612 15:03:40.382099   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.510641    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.382219   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.510706    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.382292   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.521028    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="200ms"
	I0612 15:03:40.382292   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.554579    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0612 15:03:40.382292   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.594809    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0612 15:03:40.382292   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595077    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595178    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598081    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598418    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598595    1517 policy_none.go:49] "None policy: Start"
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.600760    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.382469   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.602144    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:40.382469   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610755    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0612 15:03:40.382469   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610783    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0612 15:03:40.382544   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610843    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0612 15:03:40.382544   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.611758    1517 state_mem.go:75] "Updated machine memory state"
	I0612 15:03:40.382544   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.613995    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0612 15:03:40.382544   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.614216    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0612 15:03:40.382618   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615027    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0612 15:03:40.382618   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615636    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0612 15:03:40.382618   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615685    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0612 15:03:40.382618   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.615730    1517 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0612 15:03:40.382712   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.616221    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0612 15:03:40.382712   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.632621    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.382712   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.632711    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.382808   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.634150    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-025000\" not found"
	I0612 15:03:40.382808   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.644874    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:40.382889   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:40.382889   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:40.382889   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:40.382889   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:40.382968   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.717070    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d6071cd4356268889f798790dc93ce06" podNamespace="kube-system" podName="kube-apiserver-multinode-025000"
	I0612 15:03:40.382968   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.719714    1517 topology_manager.go:215] "Topology Admit Handler" podUID="88de11d8b1aaec126153d44e87c4b5dd" podNamespace="kube-system" podName="kube-controller-manager-multinode-025000"
	I0612 15:03:40.383082   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.720740    1517 topology_manager.go:215] "Topology Admit Handler" podUID="de62e7fd7d0feea82620e745032c1a67" podNamespace="kube-system" podName="kube-scheduler-multinode-025000"
	I0612 15:03:40.383082   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.722295    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="400ms"
	I0612 15:03:40.383082   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.724629    1517 topology_manager.go:215] "Topology Admit Handler" podUID="7b6b5637642f3d915c0db1461c7074e6" podNamespace="kube-system" podName="etcd-multinode-025000"
	I0612 15:03:40.383177   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725657    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad98f611536b15941d0f49c694b6b6c39318bca8a66620735a88a81a12d3610"
	I0612 15:03:40.383177   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725708    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb4351fab502e49592d49234119b810b53c5916eaf732d4ba148b3ad1eed4e6a"
	I0612 15:03:40.383177   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725720    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9e051df48486e732da2c72bf2d0e3ec93cf8774632ecedd8825e656ba04a93"
	I0612 15:03:40.383258   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725728    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2784305b1d5e9a088f0b73ff004b2d9eca305d397de3d7b9912638323d7c66b2"
	I0612 15:03:40.383258   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725737    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40443305b24f54fea9235d98bfb16f2d550b8914bfa46c0592b5c24be1ad5569"
	I0612 15:03:40.383258   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.736677    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9933fdc9ca72b65b57e5b4b996215763431b87f18af45fdc8195252497e1d9a"
	I0612 15:03:40.383354   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.760928    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d"
	I0612 15:03:40.383354   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.777475    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27"
	I0612 15:03:40.383453   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.794474    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f2d5f19e95ea2d1cfe140159a55c94f5d809c3b67661196b1e285ac389537f"
	I0612 15:03:40.383453   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.803790    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.383453   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.804820    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:40.383533   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885533    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-ca-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.383533   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885705    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-ca-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:40.383611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885746    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-k8s-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:40.383611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885768    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-k8s-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.383705   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885803    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-kubeconfig\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.383782   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885844    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.383782   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885869    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de62e7fd7d0feea82620e745032c1a67-kubeconfig\") pod \"kube-scheduler-multinode-025000\" (UID: \"de62e7fd7d0feea82620e745032c1a67\") " pod="kube-system/kube-scheduler-multinode-025000"
	I0612 15:03:40.383877   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885941    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-certs\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:40.383877   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885970    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-data\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:40.383956   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885997    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:40.384036   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.886023    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-flexvolume-dir\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.384036   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.124157    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="800ms"
	I0612 15:03:40.384036   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.206204    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.384165   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.207259    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:40.384165   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.576346    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384263   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.576490    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384263   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.832319    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384365   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.832430    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384365   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.847085    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384365   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.847226    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384479   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.894179    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384479   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.894251    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384565   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.910045    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7"
	I0612 15:03:40.384565   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.925848    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="1.6s"
	I0612 15:03:40.384648   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.967442    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:40.384731   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: I0612 22:02:27.008640    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.384731   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: E0612 22:02:27.009541    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:40.384842   13752 command_runner.go:130] > Jun 12 22:02:28 multinode-025000 kubelet[1517]: I0612 22:02:28.611782    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.384842   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.067503    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-025000"
	I0612 15:03:40.384842   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.069193    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-025000"
	I0612 15:03:40.384842   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.078543    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0612 15:03:40.384927   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.083746    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0612 15:03:40.384927   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.087512    1517 setters.go:580] "Node became not ready" node="multinode-025000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-12T22:02:31Z","lastTransitionTime":"2024-06-12T22:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0612 15:03:40.384927   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.485482    1517 apiserver.go:52] "Watching apiserver"
	I0612 15:03:40.385023   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.491838    1517 topology_manager.go:215] "Topology Admit Handler" podUID="1f004a05-3f5f-444b-9ac0-88f0e23da904" podNamespace="kube-system" podName="kindnet-bqlg8"
	I0612 15:03:40.385023   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.492246    1517 topology_manager.go:215] "Topology Admit Handler" podUID="10b24fa7-8eea-4fbb-ab18-404e853aa7ab" podNamespace="kube-system" podName="kube-proxy-47lr8"
	I0612 15:03:40.385023   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.493249    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-025000" podUID="6b429685-b322-4b00-83fc-743786ff40e1"
	I0612 15:03:40.385139   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494355    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-025000" podUID="630bafc4-4576-4974-b638-7ab52dcfec18"
	I0612 15:03:40.385242   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494642    1517 topology_manager.go:215] "Topology Admit Handler" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgcxw"
	I0612 15:03:40.385242   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494763    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4" podNamespace="kube-system" podName="storage-provisioner"
	I0612 15:03:40.385330   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494876    1517 topology_manager.go:215] "Topology Admit Handler" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4" podNamespace="default" podName="busybox-fc5497c4f-45qqd"
	I0612 15:03:40.385380   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495127    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.385428   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495306    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.499353    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.541672    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.557538    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-025000"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593012    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-cni-cfg\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593075    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-lib-modules\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593188    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-lib-modules\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593684    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d20f7489-1aa1-44b8-9221-4d1849884be4-tmp\") pod \"storage-provisioner\" (UID: \"d20f7489-1aa1-44b8-9221-4d1849884be4\") " pod="kube-system/storage-provisioner"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593711    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-xtables-lock\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593752    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-xtables-lock\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594460    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.094549489 +0000 UTC m=+6.763435539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.622682    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dcbc8e258f964f689941b6844769d9" path="/var/lib/kubelet/pods/04dcbc8e258f964f689941b6844769d9/volumes"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.623801    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610414aa8160848c0b6b79ea0a700b83" path="/var/lib/kubelet/pods/610414aa8160848c0b6b79ea0a700b83/volumes"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.626972    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627014    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627132    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.127114564 +0000 UTC m=+6.796000614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.673848    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-025000" podStartSLOduration=0.673800971 podStartE2EDuration="673.800971ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.632162175 +0000 UTC m=+6.301048225" watchObservedRunningTime="2024-06-12 22:02:31.673800971 +0000 UTC m=+6.342686921"
	I0612 15:03:40.386059   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.674234    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-025000" podStartSLOduration=0.674226172 podStartE2EDuration="674.226172ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.67337587 +0000 UTC m=+6.342261920" watchObservedRunningTime="2024-06-12 22:02:31.674226172 +0000 UTC m=+6.343112222"
	I0612 15:03:40.386059   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099190    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.386059   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099284    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.099266752 +0000 UTC m=+7.768152702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.386059   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199774    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386174   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199808    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386212   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199864    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.199845384 +0000 UTC m=+7.868731334 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386268   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.394461    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.774495    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.791274    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106313    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106394    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.106375874 +0000 UTC m=+9.775261924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208318    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208375    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208431    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.208413609 +0000 UTC m=+9.877299559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.617822    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.618103    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.125562    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.126376    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.12633293 +0000 UTC m=+13.795218980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226548    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226607    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226693    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.226674161 +0000 UTC m=+13.895560111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.616712    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617047    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.386879   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617270    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.386879   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618147    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.386988   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618607    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.386988   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164650    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.387065   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164956    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.164935524 +0000 UTC m=+21.833821574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.265764    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266004    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.266062158 +0000 UTC m=+21.934948208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.616548    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.617577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:40 multinode-025000 kubelet[1517]: E0612 22:02:40.619032    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617010    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617816    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617105    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617755    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.617112    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.618034    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.621402    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234271    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.234402815 +0000 UTC m=+37.903288765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335532    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335632    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335696    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.33568009 +0000 UTC m=+38.004566140 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387674   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617048    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387674   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617530    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387820   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617040    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617673    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:50 multinode-025000 kubelet[1517]: E0612 22:02:50.623368    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.616848    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.617656    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617130    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617679    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617082    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617595    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.624795    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.617430    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.618180    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.616577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.617339    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:03:00 multinode-025000 kubelet[1517]: E0612 22:03:00.626741    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617176    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617573    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236005    1517 scope.go:117] "RemoveContainer" containerID="61910369e0d4ba1a5246a686e904c168fc7467d239e475004146ddf2835e8e78"
	I0612 15:03:40.388473   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236962    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.239739    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d20f7489-1aa1-44b8-9221-4d1849884be4)\"" pod="kube-system/storage-provisioner" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284341    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.284401461 +0000 UTC m=+69.953287411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385432    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385531    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.385594617 +0000 UTC m=+70.054480667 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.616668    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.617100    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617214    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617674    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.628542    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.616455    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.389396   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.617581    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.389396   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617093    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.389396   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617405    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.389531   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 kubelet[1517]: I0612 22:03:13.617647    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:40.389531   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.637114    1517 scope.go:117] "RemoveContainer" containerID="0749f44d03561395230c8a60a41853a49502741bf3bcd45acc924d346061f5b0"
	I0612 15:03:40.389570   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: E0612 22:03:25.663119    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:40.389570   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:40.389612   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:40.389648   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:40.389697   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:40.389697   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.699754    1517 scope.go:117] "RemoveContainer" containerID="2455f315465b9508a3fe1025d7150342eedb3cb09eb5f8fd9b2cbbffe1306db0"
	I0612 15:03:40.431275   13752 logs.go:123] Gathering logs for kube-scheduler [755750ecd1e3] ...
	I0612 15:03:40.431275   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 755750ecd1e3"
	I0612 15:03:40.456107   13752 command_runner.go:130] ! I0612 22:02:28.771072       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:40.460394   13752 command_runner.go:130] ! W0612 22:02:31.003959       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:40.460394   13752 command_runner.go:130] ! W0612 22:02:31.004072       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:40.460516   13752 command_runner.go:130] ! W0612 22:02:31.004087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:40.460516   13752 command_runner.go:130] ! W0612 22:02:31.004098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:40.460619   13752 command_runner.go:130] ! I0612 22:02:31.034273       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:40.460668   13752 command_runner.go:130] ! I0612 22:02:31.034440       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.460668   13752 command_runner.go:130] ! I0612 22:02:31.039288       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:40.460748   13752 command_runner.go:130] ! I0612 22:02:31.039331       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:40.460748   13752 command_runner.go:130] ! I0612 22:02:31.039699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:40.460800   13752 command_runner.go:130] ! I0612 22:02:31.040018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:40.460841   13752 command_runner.go:130] ! I0612 22:02:31.139849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:40.463106   13752 logs.go:123] Gathering logs for kube-proxy [c4842faba751] ...
	I0612 15:03:40.463180   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4842faba751"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.407607       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.423801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.198.154"]
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.480061       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.480182       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.480205       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.484521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.485171       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.485535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488126       1 config.go:192] "Starting service config controller"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488188       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488969       1 config.go:319] "Starting node config controller"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.489001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.588500       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.588641       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.589226       1 shared_informer.go:320] Caches are synced for node config
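(The kube-proxy block above was collected by the gathering step that opens it: logs.go runs "docker logs --tail 400 c4842faba751" on the node over ssh. A minimal manual equivalent, a sketch assuming docker is the runtime and kubelet's usual k8s_ container-name prefix:

    # Sketch: reproduce the "Gathering logs for kube-proxy" step by hand.
    # Finds the kube-proxy container on the node, then tails its last 400
    # lines, mirroring logs.go's "docker logs --tail 400 <id>".
    minikube -p multinode-025000 ssh -- \
      "docker ps --filter name=k8s_kube-proxy --format '{{.ID}}' \
         | xargs -r -n1 docker logs --tail 400"
)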
	I0612 15:03:40.492327   13752 logs.go:123] Gathering logs for kindnet [4d60d82f6bc5] ...
	I0612 15:03:40.492991   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d60d82f6bc5"
	I0612 15:03:40.536992   13752 command_runner.go:130] ! I0612 21:48:53.982546       1 main.go:227] handling current node
	I0612 15:03:40.537093   13752 command_runner.go:130] ! I0612 21:48:53.982561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537093   13752 command_runner.go:130] ! I0612 21:48:53.982568       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537093   13752 command_runner.go:130] ! I0612 21:48:53.982982       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537190   13752 command_runner.go:130] ! I0612 21:48:53.983049       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537190   13752 command_runner.go:130] ! I0612 21:49:03.989649       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537190   13752 command_runner.go:130] ! I0612 21:49:03.989791       1 main.go:227] handling current node
	I0612 15:03:40.537294   13752 command_runner.go:130] ! I0612 21:49:03.989809       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537294   13752 command_runner.go:130] ! I0612 21:49:03.989817       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537294   13752 command_runner.go:130] ! I0612 21:49:03.990195       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537294   13752 command_runner.go:130] ! I0612 21:49:03.990415       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000384       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000493       1 main.go:227] handling current node
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000507       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000513       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000627       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:14.000640       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.006829       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.006871       1 main.go:227] handling current node
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.006883       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.006889       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.007645       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537559   13752 command_runner.go:130] ! I0612 21:49:24.007745       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537559   13752 command_runner.go:130] ! I0612 21:49:34.016679       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537559   13752 command_runner.go:130] ! I0612 21:49:34.016806       1 main.go:227] handling current node
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:34.016838       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:34.016845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:34.017149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:34.017279       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:44.025835       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:44.025933       1 main.go:227] handling current node
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:44.025947       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:44.025955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:44.026381       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:44.026533       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:54.033148       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:54.033257       1 main.go:227] handling current node
	I0612 15:03:40.537821   13752 command_runner.go:130] ! I0612 21:49:54.033273       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537821   13752 command_runner.go:130] ! I0612 21:49:54.033281       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537821   13752 command_runner.go:130] ! I0612 21:49:54.033402       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537821   13752 command_runner.go:130] ! I0612 21:49:54.033435       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537997   13752 command_runner.go:130] ! I0612 21:50:04.046279       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.538337   13752 command_runner.go:130] ! I0612 21:50:04.046719       1 main.go:227] handling current node
	I0612 15:03:40.538337   13752 command_runner.go:130] ! I0612 21:50:04.046832       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:04.047109       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:04.047537       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:04.047572       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:14.064171       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:14.064216       1 main.go:227] handling current node
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:14.064230       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.538954   13752 command_runner.go:130] ! I0612 21:50:14.064236       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.538954   13752 command_runner.go:130] ! I0612 21:50:14.064574       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.539057   13752 command_runner.go:130] ! I0612 21:50:14.064665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.539099   13752 command_runner.go:130] ! I0612 21:50:24.071894       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.539099   13752 command_runner.go:130] ! I0612 21:50:24.071935       1 main.go:227] handling current node
	I0612 15:03:40.539168   13752 command_runner.go:130] ! I0612 21:50:24.071949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:24.071955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:24.072148       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:24.072184       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086428       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086522       1 main.go:227] handling current node
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086536       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086543       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086690       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.093862       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.093905       1 main.go:227] handling current node
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.093919       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.093925       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.094840       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.094916       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:54.102869       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.541073   13752 command_runner.go:130] ! I0612 21:50:54.103074       1 main.go:227] handling current node
	I0612 15:03:40.541355   13752 command_runner.go:130] ! I0612 21:50:54.103091       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.541459   13752 command_runner.go:130] ! I0612 21:50:54.103100       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.541459   13752 command_runner.go:130] ! I0612 21:50:54.103237       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.541459   13752 command_runner.go:130] ! I0612 21:50:54.103276       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.541582   13752 command_runner.go:130] ! I0612 21:51:04.110391       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.541582   13752 command_runner.go:130] ! I0612 21:51:04.110501       1 main.go:227] handling current node
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:04.110517       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:04.110556       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:04.110721       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:04.110794       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:14.121126       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:14.121263       1 main.go:227] handling current node
	I0612 15:03:40.541829   13752 command_runner.go:130] ! I0612 21:51:14.121280       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.541829   13752 command_runner.go:130] ! I0612 21:51:14.121288       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.541829   13752 command_runner.go:130] ! I0612 21:51:14.121430       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.541829   13752 command_runner.go:130] ! I0612 21:51:14.121462       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.541953   13752 command_runner.go:130] ! I0612 21:51:24.131659       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.542570   13752 command_runner.go:130] ! I0612 21:51:24.131690       1 main.go:227] handling current node
	I0612 15:03:40.542570   13752 command_runner.go:130] ! I0612 21:51:24.131702       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.542570   13752 command_runner.go:130] ! I0612 21:51:24.131708       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:24.132287       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:24.132319       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:34.139419       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:34.139546       1 main.go:227] handling current node
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:34.139561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:34.139570       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.542817   13752 command_runner.go:130] ! I0612 21:51:34.140149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.542817   13752 command_runner.go:130] ! I0612 21:51:34.140253       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.542864   13752 command_runner.go:130] ! I0612 21:51:44.152295       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.542864   13752 command_runner.go:130] ! I0612 21:51:44.152430       1 main.go:227] handling current node
	I0612 15:03:40.542892   13752 command_runner.go:130] ! I0612 21:51:44.152464       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.542909   13752 command_runner.go:130] ! I0612 21:51:44.152471       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.542909   13752 command_runner.go:130] ! I0612 21:51:44.153262       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.542909   13752 command_runner.go:130] ! I0612 21:51:44.153471       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.542909   13752 command_runner.go:130] ! I0612 21:51:54.160684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.160938       1 main.go:227] handling current node
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.160953       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.160960       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.161457       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.161482       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543084   13752 command_runner.go:130] ! I0612 21:52:04.170421       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543084   13752 command_runner.go:130] ! I0612 21:52:04.170526       1 main.go:227] handling current node
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:04.170541       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:04.170548       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:04.171076       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:04.171113       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180403       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180490       1 main.go:227] handling current node
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180508       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180516       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180994       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:14.181032       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.195314       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.195545       1 main.go:227] handling current node
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.195735       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.195807       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.196026       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:24.196064       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.202013       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.202806       1 main.go:227] handling current node
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.202932       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.203029       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.203265       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.203299       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209440       1 main.go:227] handling current node
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209476       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209546       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209839       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543708   13752 command_runner.go:130] ! I0612 21:52:44.210283       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543743   13752 command_runner.go:130] ! I0612 21:52:54.223351       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543743   13752 command_runner.go:130] ! I0612 21:52:54.223443       1 main.go:227] handling current node
	I0612 15:03:40.543793   13752 command_runner.go:130] ! I0612 21:52:54.223459       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543793   13752 command_runner.go:130] ! I0612 21:52:54.223466       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543828   13752 command_runner.go:130] ! I0612 21:52:54.223810       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543828   13752 command_runner.go:130] ! I0612 21:52:54.223840       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543828   13752 command_runner.go:130] ! I0612 21:53:04.236876       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543877   13752 command_runner.go:130] ! I0612 21:53:04.237155       1 main.go:227] handling current node
	I0612 15:03:40.543877   13752 command_runner.go:130] ! I0612 21:53:04.237949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543911   13752 command_runner.go:130] ! I0612 21:53:04.238341       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543933   13752 command_runner.go:130] ! I0612 21:53:04.238673       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543933   13752 command_runner.go:130] ! I0612 21:53:04.238707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245069       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245110       1 main.go:227] handling current node
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245122       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245131       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245834       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.258923       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.258965       1 main.go:227] handling current node
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.258977       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.258983       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.259367       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:24.259399       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:34.265573       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:34.265738       1 main.go:227] handling current node
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:34.265787       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:34.265797       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:34.266180       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:34.266257       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:44.278968       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:44.279173       1 main.go:227] handling current node
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:44.279207       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:44.279294       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:44.279698       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:44.279829       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:54.290366       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:54.290472       1 main.go:227] handling current node
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:54.290487       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:54.290494       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544414   13752 command_runner.go:130] ! I0612 21:53:54.291158       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544414   13752 command_runner.go:130] ! I0612 21:53:54.291263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544414   13752 command_runner.go:130] ! I0612 21:54:04.308014       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544414   13752 command_runner.go:130] ! I0612 21:54:04.308117       1 main.go:227] handling current node
	I0612 15:03:40.544497   13752 command_runner.go:130] ! I0612 21:54:04.308133       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544497   13752 command_runner.go:130] ! I0612 21:54:04.308142       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544497   13752 command_runner.go:130] ! I0612 21:54:04.308605       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544497   13752 command_runner.go:130] ! I0612 21:54:04.308643       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316380       1 main.go:227] handling current node
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316396       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316403       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316942       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316959       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:24.330853       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331009       1 main.go:227] handling current node
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331025       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331033       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331178       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331213       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340396       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340543       1 main.go:227] handling current node
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340558       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340565       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340924       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.341013       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.347468       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.347599       1 main.go:227] handling current node
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.347614       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.347622       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.348279       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:44.348396       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.364900       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.365031       1 main.go:227] handling current node
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.365046       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.365054       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.365542       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:54:54.365727       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:55:04.381041       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:55:04.381087       1 main.go:227] handling current node
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:55:04.381103       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:55:04.381110       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:04.381700       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:04.381853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:14.395619       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:14.395666       1 main.go:227] handling current node
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:14.395679       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:14.395686       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:14.396514       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:14.396536       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:24.411927       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:24.412012       1 main.go:227] handling current node
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:24.412028       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:24.412036       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:24.412568       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:24.412661       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:34.420011       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:34.420100       1 main.go:227] handling current node
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:34.420115       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:34.420122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:34.420481       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:34.420570       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432502       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432604       1 main.go:227] handling current node
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432620       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432632       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432881       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.433061       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.446991       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447440       1 main.go:227] handling current node
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447622       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447655       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447830       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447901       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:56:04.463393       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:56:04.463546       1 main.go:227] handling current node
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:04.463575       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:04.463596       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:04.463900       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:04.463932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.477690       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.477837       1 main.go:227] handling current node
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.477852       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.477860       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.478029       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:14.478096       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:24.485525       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:24.485620       1 main.go:227] handling current node
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:24.485655       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:24.485663       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547393   13752 command_runner.go:130] ! I0612 21:56:24.486202       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547514   13752 command_runner.go:130] ! I0612 21:56:24.486237       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547545   13752 command_runner.go:130] ! I0612 21:56:34.502904       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547545   13752 command_runner.go:130] ! I0612 21:56:34.502951       1 main.go:227] handling current node
	I0612 15:03:40.547584   13752 command_runner.go:130] ! I0612 21:56:34.502964       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547584   13752 command_runner.go:130] ! I0612 21:56:34.502970       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547584   13752 command_runner.go:130] ! I0612 21:56:34.503088       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:34.503684       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512292       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512356       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512368       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512374       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512909       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.513033       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.520903       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521017       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521034       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521041       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521441       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.535531       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.535625       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.535665       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.535672       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.536272       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.536355       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559304       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559354       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559375       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559382       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559735       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.560332       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568057       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568103       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568116       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568938       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.569042       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:34.584121       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:34.584277       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:34.584502       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548165   13752 command_runner.go:130] ! I0612 21:57:34.584607       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548165   13752 command_runner.go:130] ! I0612 21:57:34.584995       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.548165   13752 command_runner.go:130] ! I0612 21:57:34.585095       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.548215   13752 command_runner.go:130] ! I0612 21:57:44.600201       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548215   13752 command_runner.go:130] ! I0612 21:57:44.600339       1 main.go:227] handling current node
	I0612 15:03:40.548215   13752 command_runner.go:130] ! I0612 21:57:44.600353       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548256   13752 command_runner.go:130] ! I0612 21:57:44.600361       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548256   13752 command_runner.go:130] ! I0612 21:57:44.600842       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.548300   13752 command_runner.go:130] ! I0612 21:57:44.600859       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.548300   13752 command_runner.go:130] ! I0612 21:57:54.615436       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548300   13752 command_runner.go:130] ! I0612 21:57:54.615497       1 main.go:227] handling current node
	I0612 15:03:40.548339   13752 command_runner.go:130] ! I0612 21:57:54.615511       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:57:54.615536       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.629487       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.629657       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.629797       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.629891       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.630131       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.631059       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.631221       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647500       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647527       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647539       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647544       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647661       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647672       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.655905       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656017       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656064       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656140       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656636       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656721       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.670254       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.670590       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.670966       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.671845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.672269       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.672369       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.682684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.682854       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.682877       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.682887       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.683737       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.683808       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691077       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691167       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691199       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691207       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691344       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691357       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:59:04.700863       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:59:04.701017       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:59:04.701032       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:59:04.701040       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548982   13752 command_runner.go:130] ! I0612 21:59:04.701620       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548982   13752 command_runner.go:130] ! I0612 21:59:04.701736       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548982   13752 command_runner.go:130] ! I0612 21:59:14.717668       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548982   13752 command_runner.go:130] ! I0612 21:59:14.717949       1 main.go:227] handling current node
	I0612 15:03:40.549054   13752 command_runner.go:130] ! I0612 21:59:14.717991       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549054   13752 command_runner.go:130] ! I0612 21:59:14.718050       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:14.718200       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:14.718263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724311       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724441       1 main.go:227] handling current node
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724456       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724464       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724785       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.737266       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.737410       1 main.go:227] handling current node
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.737425       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.737432       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.738157       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.738269       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746123       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746292       1 main.go:227] handling current node
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746313       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746332       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746856       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746925       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.752611       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.752658       1 main.go:227] handling current node
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.752671       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.752678       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.753183       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.753277       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.560569   13752 logs.go:123] Gathering logs for etcd [6b61f5f6483d] ...
	I0612 15:03:40.560569   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61f5f6483d"
	I0612 15:03:40.585934   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.594582Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:40.592338   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.595941Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.200.184:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.200.184:2380","--initial-cluster=multinode-025000=https://172.23.200.184:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.200.184:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.200.184:2380","--name=multinode-025000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0612 15:03:40.592386   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596165Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0612 15:03:40.592386   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.596271Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:40.592452   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596356Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.200.184:2380"]}
	I0612 15:03:40.592498   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596492Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:40.592498   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.611167Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"]}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.613093Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-025000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.643295Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"27.151363ms"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.674268Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702241Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","commit-index":2039}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=()"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became follower at term 2"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.70261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b93ef5bd064a9684 [peers: [], term: 2, commit: 2039, applied: 0, lastindex: 2039, lastterm: 2]"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.719372Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.724082Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1403}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.735755Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1769}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.743333Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.753311Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b93ef5bd064a9684","timeout":"7s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755587Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b93ef5bd064a9684"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755671Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b93ef5bd064a9684","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758078Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758939Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=(13348376537775904388)"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759589Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","added-peer-id":"b93ef5bd064a9684","added-peer-peer-urls":["https://172.23.198.154:2380"]}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.760197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","cluster-version":"3.5"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.761198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.764395Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:40.593192   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.765492Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b93ef5bd064a9684","initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0612 15:03:40.593192   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766195Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766744Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.200.184:2380"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.767384Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.200.184:2380"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 is starting a new election at term 2"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.50332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became pre-candidate at term 2"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgPreVoteResp from b93ef5bd064a9684 at term 2"}
	I0612 15:03:40.593350   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became candidate at term 3"}
	I0612 15:03:40.593350   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgVoteResp from b93ef5bd064a9684 at term 3"}
	I0612 15:03:40.593395   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became leader at term 3"}
	I0612 15:03:40.593395   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b93ef5bd064a9684 elected leader b93ef5bd064a9684 at term 3"}
	I0612 15:03:40.593395   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:40.593445   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:40.593488   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b93ef5bd064a9684","local-member-attributes":"{Name:multinode-025000 ClientURLs:[https://172.23.200.184:2379]}","request-path":"/0/members/b93ef5bd064a9684/attributes","cluster-id":"a7fa2563dcb4b7b8","publish-timeout":"7s"}
	I0612 15:03:40.593555   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.512996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0612 15:03:40.593555   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.513243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0612 15:03:40.593609   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.514729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0612 15:03:40.593650   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.515422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.200.184:2379"}
	I0612 15:03:40.600476   13752 logs.go:123] Gathering logs for coredns [26e5daf354e3] ...
	I0612 15:03:40.600476   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e5daf354e3"
	I0612 15:03:40.628932   13752 command_runner.go:130] > .:53
	I0612 15:03:40.628932   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:40.628932   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:40.628932   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:40.628932   13752 command_runner.go:130] > [INFO] 127.0.0.1:54952 - 9035 "HINFO IN 225709527310201015.7757756956422223857. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039110892s
	I0612 15:03:43.155729   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:03:43.183777   13752 command_runner.go:130] > 1830
	I0612 15:03:43.183809   13752 api_server.go:72] duration metric: took 1m7.3621211s to wait for apiserver process to appear ...
	I0612 15:03:43.183809   13752 api_server.go:88] waiting for apiserver healthz status ...
	I0612 15:03:43.192231   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0612 15:03:43.216346   13752 command_runner.go:130] > bbe2d2e51b5f
	I0612 15:03:43.217414   13752 logs.go:276] 1 containers: [bbe2d2e51b5f]
	I0612 15:03:43.226578   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0612 15:03:43.248473   13752 command_runner.go:130] > 6b61f5f6483d
	I0612 15:03:43.248529   13752 logs.go:276] 1 containers: [6b61f5f6483d]
	I0612 15:03:43.257990   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0612 15:03:43.283767   13752 command_runner.go:130] > 26e5daf354e3
	I0612 15:03:43.285218   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:03:43.285218   13752 logs.go:276] 2 containers: [26e5daf354e3 e83cf4eef49e]
	I0612 15:03:43.293981   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0612 15:03:43.318070   13752 command_runner.go:130] > 755750ecd1e3
	I0612 15:03:43.318070   13752 command_runner.go:130] > 6b021c195669
	I0612 15:03:43.318070   13752 logs.go:276] 2 containers: [755750ecd1e3 6b021c195669]
	I0612 15:03:43.328230   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0612 15:03:43.350541   13752 command_runner.go:130] > 227a905829b0
	I0612 15:03:43.350541   13752 command_runner.go:130] > c4842faba751
	I0612 15:03:43.352203   13752 logs.go:276] 2 containers: [227a905829b0 c4842faba751]
	I0612 15:03:43.361504   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0612 15:03:43.383671   13752 command_runner.go:130] > 7acc8ff0a931
	I0612 15:03:43.383671   13752 command_runner.go:130] > 685d167da53c
	I0612 15:03:43.384233   13752 logs.go:276] 2 containers: [7acc8ff0a931 685d167da53c]
	I0612 15:03:43.395335   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0612 15:03:43.417367   13752 command_runner.go:130] > cccfd1e9fef5
	I0612 15:03:43.417911   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:03:43.417911   13752 logs.go:276] 2 containers: [cccfd1e9fef5 4d60d82f6bc5]
	I0612 15:03:43.417911   13752 logs.go:123] Gathering logs for coredns [e83cf4eef49e] ...
	I0612 15:03:43.417911   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83cf4eef49e"
	I0612 15:03:43.448914   13752 command_runner.go:130] > .:53
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:43.449013   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:43.449013   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 127.0.0.1:53490 - 39118 "HINFO IN 4677201826540465335.2322207397622737457. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048277073s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:49256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267302s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:54623 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.08558s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:51804 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.048771085s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:53027 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.100151983s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:34534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001199s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:44985 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000141701s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:54544 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000543s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:55517 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000123601s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:42995 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099501s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:51839 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.135718274s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:52123 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000304602s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:36740 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274801s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:48333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003287018s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:55754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000962s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:51695 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000224102s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:49605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096301s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:37746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283001s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:54995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106501s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:49201 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077401s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:60577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:36057 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107301s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:43898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:49177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:45207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000584s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:36676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151001s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:60305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305802s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:37468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:34743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240801s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:42306 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000309601s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:36509 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152901s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:55614 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000545s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:39195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130301s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:34618 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272902s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:44444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:35691 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001307s
	I0612 15:03:43.449596   13752 command_runner.go:130] > [INFO] 10.244.1.2:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	I0612 15:03:43.449596   13752 command_runner.go:130] > [INFO] 10.244.1.2:41925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207401s
	I0612 15:03:43.449596   13752 command_runner.go:130] > [INFO] 10.244.1.2:44306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000736s
	I0612 15:03:43.449596   13752 command_runner.go:130] > [INFO] 10.244.1.2:46158 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000547s
	I0612 15:03:43.449651   13752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0612 15:03:43.449651   13752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0612 15:03:43.452683   13752 logs.go:123] Gathering logs for kube-scheduler [6b021c195669] ...
	I0612 15:03:43.452683   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b021c195669"
	I0612 15:03:43.485388   13752 command_runner.go:130] ! I0612 21:39:26.474423       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:43.485786   13752 command_runner.go:130] ! W0612 21:39:28.263287       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:43.485857   13752 command_runner.go:130] ! W0612 21:39:28.263543       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.485911   13752 command_runner.go:130] ! W0612 21:39:28.263706       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:43.485968   13752 command_runner.go:130] ! W0612 21:39:28.263849       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:43.485968   13752 command_runner.go:130] ! I0612 21:39:28.303051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:43.485968   13752 command_runner.go:130] ! I0612 21:39:28.305840       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.486017   13752 command_runner.go:130] ! I0612 21:39:28.310682       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:43.486017   13752 command_runner.go:130] ! I0612 21:39:28.312812       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:43.486017   13752 command_runner.go:130] ! I0612 21:39:28.313421       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:43.486017   13752 command_runner.go:130] ! I0612 21:39:28.313594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:43.486083   13752 command_runner.go:130] ! W0612 21:39:28.336905       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.486083   13752 command_runner.go:130] ! E0612 21:39:28.337826       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.486162   13752 command_runner.go:130] ! W0612 21:39:28.338227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:43.486202   13752 command_runner.go:130] ! E0612 21:39:28.338391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:43.486202   13752 command_runner.go:130] ! W0612 21:39:28.338652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:43.486202   13752 command_runner.go:130] ! E0612 21:39:28.338896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:43.486202   13752 command_runner.go:130] ! W0612 21:39:28.339195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:43.486397   13752 command_runner.go:130] ! E0612 21:39:28.339406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:43.486397   13752 command_runner.go:130] ! W0612 21:39:28.339694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:43.486476   13752 command_runner.go:130] ! E0612 21:39:28.339892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:43.486558   13752 command_runner.go:130] ! W0612 21:39:28.340188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:43.486596   13752 command_runner.go:130] ! E0612 21:39:28.340362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:43.486648   13752 command_runner.go:130] ! W0612 21:39:28.340697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.486719   13752 command_runner.go:130] ! E0612 21:39:28.341129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.486766   13752 command_runner.go:130] ! W0612 21:39:28.341447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.486805   13752 command_runner.go:130] ! E0612 21:39:28.341664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.486805   13752 command_runner.go:130] ! W0612 21:39:28.341989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:43.486853   13752 command_runner.go:130] ! E0612 21:39:28.342229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:43.486907   13752 command_runner.go:130] ! W0612 21:39:28.342540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487016   13752 command_runner.go:130] ! E0612 21:39:28.344839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487133   13752 command_runner.go:130] ! W0612 21:39:28.345316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:43.487186   13752 command_runner.go:130] ! E0612 21:39:28.347872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:43.487261   13752 command_runner.go:130] ! W0612 21:39:28.345596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:28.345651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:28.345691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:28.345823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:28.348490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:28.348742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:28.349066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:28.349147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.192073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.192126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.249000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.249248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.268880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.268972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.271696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.271839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.275489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.296739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.297145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487892   13752 command_runner.go:130] ! W0612 21:39:29.433593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:43.487892   13752 command_runner.go:130] ! E0612 21:39:29.433887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:43.487967   13752 command_runner.go:130] ! W0612 21:39:29.471880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:43.487967   13752 command_runner.go:130] ! E0612 21:39:29.471994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:43.488066   13752 command_runner.go:130] ! W0612 21:39:29.482669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.488105   13752 command_runner.go:130] ! E0612 21:39:29.483008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.488105   13752 command_runner.go:130] ! W0612 21:39:29.569402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:43.488105   13752 command_runner.go:130] ! E0612 21:39:29.571433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:43.488172   13752 command_runner.go:130] ! W0612 21:39:29.677906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:43.488252   13752 command_runner.go:130] ! E0612 21:39:29.677950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:43.488314   13752 command_runner.go:130] ! W0612 21:39:29.687951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 21:39:29.688054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! W0612 21:39:29.780288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 21:39:29.780411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! W0612 21:39:29.832564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 21:39:29.832892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! W0612 21:39:29.889591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 21:39:29.889868       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.488353   13752 command_runner.go:130] ! I0612 21:39:32.513980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 22:00:01.172050       1 run.go:74] "command failed" err="finished without leader elect"
	I0612 15:03:43.500094   13752 logs.go:123] Gathering logs for kube-controller-manager [7acc8ff0a931] ...
	I0612 15:03:43.500094   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7acc8ff0a931"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.579013       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.927149       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.927184       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.930688       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.932993       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.933167       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.933539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.987820       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.988653       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.994458       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.995780       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.996873       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.005703       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.005720       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.006099       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.006120       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.011328       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.013199       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.013216       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:43.531221   13752 command_runner.go:130] ! W0612 22:02:33.045760       1 shared_informer.go:597] resyncPeriod 19h21m1.650821539s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.046400       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.046742       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.047003       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.047066       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.047091       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:43.531770   13752 command_runner.go:130] ! I0612 22:02:33.047150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:43.531770   13752 command_runner.go:130] ! I0612 22:02:33.047175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:43.531865   13752 command_runner.go:130] ! I0612 22:02:33.047875       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:43.531962   13752 command_runner.go:130] ! I0612 22:02:33.048961       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:43.531962   13752 command_runner.go:130] ! I0612 22:02:33.049070       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:43.532048   13752 command_runner.go:130] ! I0612 22:02:33.049108       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:43.532048   13752 command_runner.go:130] ! I0612 22:02:33.049132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:43.532075   13752 command_runner.go:130] ! I0612 22:02:33.049173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:43.532119   13752 command_runner.go:130] ! I0612 22:02:33.049188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:43.532119   13752 command_runner.go:130] ! I0612 22:02:33.049203       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:43.532163   13752 command_runner.go:130] ! I0612 22:02:33.049218       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049235       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049307       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! W0612 22:02:33.049318       1 shared_informer.go:597] resyncPeriod 16h27m54.164006095s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049536       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049616       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049652       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049852       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049880       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.052188       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.075270       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.088124       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.088224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.088312       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.092469       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.093016       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.093183       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099173       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099302       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099269       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099467       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.102279       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.103692       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.103797       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.109335       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.109737       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.109801       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.109811       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.113018       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.114442       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.114573       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.118932       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.118955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.118979       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.119791       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:43.532715   13752 command_runner.go:130] ! I0612 22:02:33.121411       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:43.532715   13752 command_runner.go:130] ! I0612 22:02:33.119985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.532715   13752 command_runner.go:130] ! I0612 22:02:33.122332       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:43.532767   13752 command_runner.go:130] ! I0612 22:02:33.122409       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:43.532767   13752 command_runner.go:130] ! I0612 22:02:33.122432       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.532817   13752 command_runner.go:130] ! I0612 22:02:33.122572       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:43.532848   13752 command_runner.go:130] ! I0612 22:02:33.122710       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:43.532848   13752 command_runner.go:130] ! I0612 22:02:33.122722       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:43.532881   13752 command_runner.go:130] ! I0612 22:02:33.122748       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.532911   13752 command_runner.go:130] ! I0612 22:02:33.132412       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:43.532911   13752 command_runner.go:130] ! I0612 22:02:33.132517       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:43.532939   13752 command_runner.go:130] ! I0612 22:02:33.132620       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:43.532939   13752 command_runner.go:130] ! I0612 22:02:33.132660       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:43.532939   13752 command_runner.go:130] ! I0612 22:02:33.132669       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:43.532983   13752 command_runner.go:130] ! I0612 22:02:33.139478       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:43.532983   13752 command_runner.go:130] ! I0612 22:02:33.139854       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:43.532983   13752 command_runner.go:130] ! I0612 22:02:33.140261       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:43.533021   13752 command_runner.go:130] ! I0612 22:02:33.169621       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:43.533021   13752 command_runner.go:130] ! I0612 22:02:33.169819       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:43.533058   13752 command_runner.go:130] ! I0612 22:02:33.169849       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:43.533058   13752 command_runner.go:130] ! I0612 22:02:33.170074       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:43.533093   13752 command_runner.go:130] ! I0612 22:02:33.173816       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:43.533093   13752 command_runner.go:130] ! I0612 22:02:33.174120       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:43.533135   13752 command_runner.go:130] ! I0612 22:02:33.174130       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:43.533135   13752 command_runner.go:130] ! I0612 22:02:33.184678       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:43.533135   13752 command_runner.go:130] ! I0612 22:02:33.186030       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:43.533172   13752 command_runner.go:130] ! I0612 22:02:33.192152       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:43.533209   13752 command_runner.go:130] ! I0612 22:02:33.192257       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:43.533244   13752 command_runner.go:130] ! I0612 22:02:33.192268       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:43.533244   13752 command_runner.go:130] ! I0612 22:02:33.194361       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:43.533244   13752 command_runner.go:130] ! I0612 22:02:33.194659       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:43.533288   13752 command_runner.go:130] ! I0612 22:02:33.194671       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:43.533288   13752 command_runner.go:130] ! I0612 22:02:33.200378       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:43.533324   13752 command_runner.go:130] ! I0612 22:02:33.200552       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:43.533324   13752 command_runner.go:130] ! I0612 22:02:33.200579       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:43.533361   13752 command_runner.go:130] ! I0612 22:02:33.203400       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:43.533361   13752 command_runner.go:130] ! I0612 22:02:33.203797       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:43.533361   13752 command_runner.go:130] ! I0612 22:02:33.203967       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:43.533396   13752 command_runner.go:130] ! I0612 22:02:33.207566       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:43.533396   13752 command_runner.go:130] ! I0612 22:02:33.207732       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:43.533396   13752 command_runner.go:130] ! I0612 22:02:33.207743       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:43.533458   13752 command_runner.go:130] ! I0612 22:02:33.207766       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:43.533458   13752 command_runner.go:130] ! I0612 22:02:33.214389       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:43.533498   13752 command_runner.go:130] ! I0612 22:02:33.214572       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:43.533498   13752 command_runner.go:130] ! I0612 22:02:33.214655       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:43.533498   13752 command_runner.go:130] ! I0612 22:02:33.220603       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:43.533548   13752 command_runner.go:130] ! I0612 22:02:33.221181       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:43.533548   13752 command_runner.go:130] ! I0612 22:02:33.222958       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:43.533548   13752 command_runner.go:130] ! E0612 22:02:33.228603       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:43.533596   13752 command_runner.go:130] ! I0612 22:02:33.228994       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:43.533596   13752 command_runner.go:130] ! I0612 22:02:33.253059       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:43.533650   13752 command_runner.go:130] ! I0612 22:02:33.253281       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:43.533685   13752 command_runner.go:130] ! I0612 22:02:33.253292       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:43.533685   13752 command_runner.go:130] ! I0612 22:02:33.264081       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:43.533685   13752 command_runner.go:130] ! I0612 22:02:33.266480       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:43.533726   13752 command_runner.go:130] ! I0612 22:02:33.266606       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:43.533766   13752 command_runner.go:130] ! I0612 22:02:33.266742       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:43.533766   13752 command_runner.go:130] ! I0612 22:02:33.380173       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:43.533766   13752 command_runner.go:130] ! I0612 22:02:33.380458       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:43.533808   13752 command_runner.go:130] ! I0612 22:02:33.380796       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:43.533808   13752 command_runner.go:130] ! I0612 22:02:33.398346       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:43.533847   13752 command_runner.go:130] ! I0612 22:02:33.401718       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:43.533847   13752 command_runner.go:130] ! I0612 22:02:33.401737       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:43.533883   13752 command_runner.go:130] ! I0612 22:02:33.495874       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:43.533883   13752 command_runner.go:130] ! I0612 22:02:33.496386       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:43.533922   13752 command_runner.go:130] ! I0612 22:02:33.498064       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:43.533957   13752 command_runner.go:130] ! I0612 22:02:33.698817       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:43.533957   13752 command_runner.go:130] ! I0612 22:02:33.699215       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:43.533957   13752 command_runner.go:130] ! I0612 22:02:33.699646       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:43.533997   13752 command_runner.go:130] ! I0612 22:02:33.744449       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:43.534054   13752 command_runner.go:130] ! I0612 22:02:33.744531       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:43.534054   13752 command_runner.go:130] ! I0612 22:02:33.744546       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:43.534054   13752 command_runner.go:130] ! E0612 22:02:33.807267       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:43.534092   13752 command_runner.go:130] ! I0612 22:02:33.807295       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:43.534126   13752 command_runner.go:130] ! I0612 22:02:33.856639       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:43.534126   13752 command_runner.go:130] ! I0612 22:02:33.857088       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:43.534164   13752 command_runner.go:130] ! I0612 22:02:33.857273       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:43.534164   13752 command_runner.go:130] ! I0612 22:02:33.894016       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:43.534198   13752 command_runner.go:130] ! I0612 22:02:33.896048       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:43.534198   13752 command_runner.go:130] ! I0612 22:02:33.896083       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:43.534236   13752 command_runner.go:130] ! I0612 22:02:33.950707       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:43.534236   13752 command_runner.go:130] ! I0612 22:02:33.950731       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:43.534270   13752 command_runner.go:130] ! I0612 22:02:33.950771       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:43.534308   13752 command_runner.go:130] ! I0612 22:02:33.950821       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:43.534308   13752 command_runner.go:130] ! I0612 22:02:33.950870       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:43.534342   13752 command_runner.go:130] ! I0612 22:02:33.995005       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:43.534342   13752 command_runner.go:130] ! I0612 22:02:33.995247       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:43.534380   13752 command_runner.go:130] ! I0612 22:02:44.062766       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:43.534380   13752 command_runner.go:130] ! I0612 22:02:44.063067       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:43.534419   13752 command_runner.go:130] ! I0612 22:02:44.063362       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:43.534419   13752 command_runner.go:130] ! I0612 22:02:44.063411       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:43.534457   13752 command_runner.go:130] ! I0612 22:02:44.068203       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:43.534457   13752 command_runner.go:130] ! I0612 22:02:44.068603       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:43.534457   13752 command_runner.go:130] ! I0612 22:02:44.068777       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:43.534497   13752 command_runner.go:130] ! I0612 22:02:44.071309       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:43.534497   13752 command_runner.go:130] ! I0612 22:02:44.071638       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:43.534535   13752 command_runner.go:130] ! I0612 22:02:44.071795       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:43.534535   13752 command_runner.go:130] ! I0612 22:02:44.080804       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:43.534575   13752 command_runner.go:130] ! I0612 22:02:44.097810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.534575   13752 command_runner.go:130] ! I0612 22:02:44.100018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:43.534575   13752 command_runner.go:130] ! I0612 22:02:44.100030       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:43.534613   13752 command_runner.go:130] ! I0612 22:02:44.102193       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:43.534648   13752 command_runner.go:130] ! I0612 22:02:44.102337       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:43.534686   13752 command_runner.go:130] ! I0612 22:02:44.102640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.534686   13752 command_runner.go:130] ! I0612 22:02:44.102796       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:43.534720   13752 command_runner.go:130] ! I0612 22:02:44.102925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:43.534758   13752 command_runner.go:130] ! I0612 22:02:44.102986       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.534758   13752 command_runner.go:130] ! I0612 22:02:44.113771       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:43.534758   13752 command_runner.go:130] ! I0612 22:02:44.115010       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:43.534792   13752 command_runner.go:130] ! I0612 22:02:44.115463       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:43.534830   13752 command_runner.go:130] ! I0612 22:02:44.119062       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:43.534830   13752 command_runner.go:130] ! I0612 22:02:44.121259       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:43.534830   13752 command_runner.go:130] ! I0612 22:02:44.124526       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:43.534830   13752 command_runner.go:130] ! I0612 22:02:44.124650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:43.534870   13752 command_runner.go:130] ! I0612 22:02:44.124971       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:43.534908   13752 command_runner.go:130] ! I0612 22:02:44.126246       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:43.534908   13752 command_runner.go:130] ! I0612 22:02:44.133682       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:43.534908   13752 command_runner.go:130] ! I0612 22:02:44.134026       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:43.534942   13752 command_runner.go:130] ! I0612 22:02:44.141044       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:43.534942   13752 command_runner.go:130] ! I0612 22:02:44.145563       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:43.534980   13752 command_runner.go:130] ! I0612 22:02:44.158513       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:43.535015   13752 command_runner.go:130] ! I0612 22:02:44.162319       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:43.535015   13752 command_runner.go:130] ! I0612 22:02:44.162613       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:43.535053   13752 command_runner.go:130] ! I0612 22:02:44.162653       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:43.535053   13752 command_runner.go:130] ! I0612 22:02:44.163186       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 15:03:43.535087   13752 command_runner.go:130] ! I0612 22:02:44.164074       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:43.535087   13752 command_runner.go:130] ! I0612 22:02:44.164451       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:43.535125   13752 command_runner.go:130] ! I0612 22:02:44.164672       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:43.535125   13752 command_runner.go:130] ! I0612 22:02:44.164769       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:43.535165   13752 command_runner.go:130] ! I0612 22:02:44.164780       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:43.535165   13752 command_runner.go:130] ! I0612 22:02:44.167842       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:43.535165   13752 command_runner.go:130] ! I0612 22:02:44.174384       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:43.535202   13752 command_runner.go:130] ! I0612 22:02:44.182521       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:43.535236   13752 command_runner.go:130] ! I0612 22:02:44.186460       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:43.535236   13752 command_runner.go:130] ! I0612 22:02:44.194992       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:43.535275   13752 command_runner.go:130] ! I0612 22:02:44.196327       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:43.535275   13752 command_runner.go:130] ! I0612 22:02:44.196530       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:43.535275   13752 command_runner.go:130] ! I0612 22:02:44.196665       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:43.535315   13752 command_runner.go:130] ! I0612 22:02:44.200768       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:43.535315   13752 command_runner.go:130] ! I0612 22:02:44.200988       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:43.535315   13752 command_runner.go:130] ! I0612 22:02:44.201846       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.207493       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.228051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.792655ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.231633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.306µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.244808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.644732ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.246402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.002µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.297636       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.304265       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.304486       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.311023       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.350865       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.351039       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.353535       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.369296       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.372273       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.381442       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.821842       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.870923       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.871005       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:11.878868       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:24.254264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.921834ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:24.256639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.601µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.832133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.001µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.905221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.518825ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.905853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.201µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.917312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.821108ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.917472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
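[editor's note] The kube-controller-manager block above follows client-go's standard startup pattern: each controller registers a shared informer, logs "Waiting for caches to sync ...", and only starts its workers once the matching "Caches are synced" line appears. The resyncPeriod warnings near the top are the shared informer factory clamping a per-controller resync period up to the factory-wide resyncCheckPeriod because the informer had already started. Below is a minimal client-go sketch of that wait-for-sync pattern, not minikube's code: the kubeconfig path and the pod-added handler are illustrative assumptions, and it needs a reachable cluster to run.

// Minimal client-go sketch of the "Waiting for caches to sync" /
// "Caches are synced" pattern seen in the log above. The kubeconfig
// path and the AddFunc handler are illustrative, not minikube code.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One factory-wide resync period. Per-handler periods shorter than the
	// factory's check period get clamped after start, which is exactly what
	// the resyncPeriod warnings in the log report.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("pod added:", obj.(*v1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Equivalent of "Waiting for caches to sync ..." in the log; returns
	// true once the informer's local cache reflects the server state.
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced; controller workers can start")
}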
	I0612 15:03:43.553154   13752 logs.go:123] Gathering logs for kube-controller-manager [685d167da53c] ...
	I0612 15:03:43.553154   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685d167da53c"
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.275086       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.758419       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.759036       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.761311       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.761663       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.762454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.762652       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.260969       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.261096       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:43.581906   13752 command_runner.go:130] ! E0612 21:39:31.316508       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.316587       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.342032       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.342287       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.342304       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.362243       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.399024       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.399081       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.399264       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.443376       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.443603       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.443617       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.480477       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.480993       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.481007       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.523943       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.524182       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.524535       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.524741       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.553194       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.554412       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.556852       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.560273       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.560448       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.561614       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.561933       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.593308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.593438       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.593459       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.593488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:43.582406   13752 command_runner.go:130] ! I0612 21:39:31.593534       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:43.582406   13752 command_runner.go:130] ! I0612 21:39:31.593588       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:43.582406   13752 command_runner.go:130] ! I0612 21:39:31.593611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:43.582406   13752 command_runner.go:130] ! I0612 21:39:31.593650       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:43.582530   13752 command_runner.go:130] ! I0612 21:39:31.593684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:43.582530   13752 command_runner.go:130] ! I0612 21:39:31.593701       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:43.582594   13752 command_runner.go:130] ! I0612 21:39:31.593721       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:43.582594   13752 command_runner.go:130] ! I0612 21:39:31.593739       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.593950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594051       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594262       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594306       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.594500       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.594602       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.594857       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.594957       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.595276       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.595463       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.605247       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.605722       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.607199       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.668704       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.669329       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.669521       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:43.582952   13752 command_runner.go:130] ! I0612 21:39:31.820968       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:43.582952   13752 command_runner.go:130] ! I0612 21:39:31.821104       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:43.582952   13752 command_runner.go:130] ! I0612 21:39:31.821117       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:43.582952   13752 command_runner.go:130] ! I0612 21:39:31.973500       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:31.973543       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:31.975344       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:31.975377       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:32.163715       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:32.163860       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:32.320380       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:32.320516       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.320529       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.468817       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.468893       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.636144       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.636921       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.637331       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.775300       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.776007       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.778803       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.920254       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:32.920359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:32.920902       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.069533       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.069689       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.069704       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.069713       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.115693       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.115796       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.115809       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.116021       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.116257       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.116416       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.169481       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.169523       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.169561       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.170619       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.170693       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.170745       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.171426       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.171458       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.171479       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.583645   13752 command_runner.go:130] ! I0612 21:39:33.172032       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:43.583645   13752 command_runner.go:130] ! I0612 21:39:33.172160       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:43.583645   13752 command_runner.go:130] ! I0612 21:39:33.172352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:43.583645   13752 command_runner.go:130] ! I0612 21:39:33.172295       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.583768   13752 command_runner.go:130] ! I0612 21:39:43.229790       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:43.583768   13752 command_runner.go:130] ! I0612 21:39:43.230104       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:43.583768   13752 command_runner.go:130] ! I0612 21:39:43.230715       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:43.583768   13752 command_runner.go:130] ! I0612 21:39:43.230868       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:43.583832   13752 command_runner.go:130] ! E0612 21:39:43.246433       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:43.583832   13752 command_runner.go:130] ! I0612 21:39:43.246740       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:43.583832   13752 command_runner.go:130] ! I0612 21:39:43.246878       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:43.583832   13752 command_runner.go:130] ! I0612 21:39:43.247178       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:43.583832   13752 command_runner.go:130] ! I0612 21:39:43.259694       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.260105       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.260326       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.287038       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.287747       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.289545       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.296881       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.297485       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.297679       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.315673       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.316362       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.316724       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.331329       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.331610       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.331966       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.358081       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.358485       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.358595       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.358609       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.373221       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.373371       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.373388       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.386049       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.386265       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.387457       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.473855       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.474115       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.474421       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.622457       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.622831       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.622950       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:43.584378   13752 command_runner.go:130] ! I0612 21:39:43.776632       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:43.584378   13752 command_runner.go:130] ! I0612 21:39:43.777149       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:43.584378   13752 command_runner.go:130] ! I0612 21:39:43.777203       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:43.584378   13752 command_runner.go:130] ! I0612 21:39:43.923199       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:43.923416       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:43.923557       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.219008       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.219041       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.219093       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.219104       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.375322       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.375879       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:43.584533   13752 command_runner.go:130] ! I0612 21:39:44.375896       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:43.584533   13752 command_runner.go:130] ! I0612 21:39:44.419335       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:43.584533   13752 command_runner.go:130] ! I0612 21:39:44.419357       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:43.584533   13752 command_runner.go:130] ! I0612 21:39:44.419672       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.435364       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.441191       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.456985       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.457052       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.460648       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.463138       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.469825       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.469846       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.469856       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.471608       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.471748       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.472789       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.474041       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.475483       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.475505       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.476080       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.479252       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.481788       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.488300       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.491059       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.499063       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.500304       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.507471       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.525355       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:43.584901   13752 command_runner.go:130] ! I0612 21:39:44.525889       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:43.584901   13752 command_runner.go:130] ! I0612 21:39:44.526177       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:43.584901   13752 command_runner.go:130] ! I0612 21:39:44.526390       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:43.584901   13752 command_runner.go:130] ! I0612 21:39:44.526550       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.526951       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.527038       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.528601       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.528834       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.531261       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:43.585061   13752 command_runner.go:130] ! I0612 21:39:44.531462       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:43.585110   13752 command_runner.go:130] ! I0612 21:39:44.531679       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:43.585217   13752 command_runner.go:130] ! I0612 21:39:44.531942       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:43.585247   13752 command_runner.go:130] ! I0612 21:39:44.532097       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:43.585247   13752 command_runner.go:130] ! I0612 21:39:44.532523       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:43.585247   13752 command_runner.go:130] ! I0612 21:39:44.537873       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:43.585247   13752 command_runner.go:130] ! I0612 21:39:44.543447       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:43.585291   13752 command_runner.go:130] ! I0612 21:39:44.564610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:43.585329   13752 command_runner.go:130] ! I0612 21:39:44.568950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000" podCIDRs=["10.244.0.0/24"]
	I0612 15:03:43.585329   13752 command_runner.go:130] ! I0612 21:39:44.621264       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:43.585329   13752 command_runner.go:130] ! I0612 21:39:44.644803       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:43.585388   13752 command_runner.go:130] ! I0612 21:39:44.677466       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:43.585421   13752 command_runner.go:130] ! I0612 21:39:44.696400       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:43.585421   13752 command_runner.go:130] ! I0612 21:39:44.723303       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:43.585421   13752 command_runner.go:130] ! I0612 21:39:44.735837       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:43.585421   13752 command_runner.go:130] ! I0612 21:39:44.758870       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.157877       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.226557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.226973       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.795416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="243.746414ms"
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.868449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.90937ms"
	I0612 15:03:43.585607   13752 command_runner.go:130] ! I0612 21:39:45.868845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.402µs"
	I0612 15:03:43.585629   13752 command_runner.go:130] ! I0612 21:39:45.869382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.903µs"
	I0612 15:03:43.585629   13752 command_runner.go:130] ! I0612 21:39:45.905402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="386.807µs"
	I0612 15:03:43.585629   13752 command_runner.go:130] ! I0612 21:39:46.349409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.452815ms"
	I0612 15:03:43.585629   13752 command_runner.go:130] ! I0612 21:39:46.386321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.301621ms"
	I0612 15:03:43.585720   13752 command_runner.go:130] ! I0612 21:39:46.386974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="616.309µs"
	I0612 15:03:43.585746   13752 command_runner.go:130] ! I0612 21:39:56.441072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="366.601µs"
	I0612 15:03:43.585785   13752 command_runner.go:130] ! I0612 21:39:56.465727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.4µs"
	I0612 15:03:43.585785   13752 command_runner.go:130] ! I0612 21:39:57.870560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.5µs"
	I0612 15:03:43.585824   13752 command_runner.go:130] ! I0612 21:39:58.874445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448319ms"
	I0612 15:03:43.585854   13752 command_runner.go:130] ! I0612 21:39:58.875168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="103.901µs"
	I0612 15:03:43.585903   13752 command_runner.go:130] ! I0612 21:39:59.529553       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:43.585903   13752 command_runner.go:130] ! I0612 21:42:39.169243       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:43.585903   13752 command_runner.go:130] ! I0612 21:42:39.188142       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 15:03:43.585980   13752 command_runner.go:130] ! I0612 21:42:39.563565       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:43.585980   13752 command_runner.go:130] ! I0612 21:42:58.063730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586006   13752 command_runner.go:130] ! I0612 21:43:24.138579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.052538ms"
	I0612 15:03:43.586006   13752 command_runner.go:130] ! I0612 21:43:24.156190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.434267ms"
	I0612 15:03:43.586079   13752 command_runner.go:130] ! I0612 21:43:24.156677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.099µs"
	I0612 15:03:43.586079   13752 command_runner.go:130] ! I0612 21:43:24.183391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.299µs"
	I0612 15:03:43.586079   13752 command_runner.go:130] ! I0612 21:43:26.908415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.051448ms"
	I0612 15:03:43.586162   13752 command_runner.go:130] ! I0612 21:43:26.908853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34µs"
	I0612 15:03:43.586162   13752 command_runner.go:130] ! I0612 21:43:27.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.474956ms"
	I0612 15:03:43.586162   13752 command_runner.go:130] ! I0612 21:43:27.304566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488944ms"
	I0612 15:03:43.586327   13752 command_runner.go:130] ! I0612 21:47:16.485552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586327   13752 command_runner.go:130] ! I0612 21:47:16.486568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:43.586327   13752 command_runner.go:130] ! I0612 21:47:16.503987       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.2.0/24"]
	I0612 15:03:43.586397   13752 command_runner.go:130] ! I0612 21:47:19.629018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:43.586397   13752 command_runner.go:130] ! I0612 21:47:35.032365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:55:19.767980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:57:52.374240       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:57:58.774442       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:57:58.774588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:57:58.809041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.3.0/24"]
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:58:06.126407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:59:45.222238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.601714   13752 logs.go:123] Gathering logs for kindnet [cccfd1e9fef5] ...
	I0612 15:03:43.601714   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccfd1e9fef5"
	I0612 15:03:43.634705   13752 command_runner.go:130] ! I0612 22:02:33.621070       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0612 15:03:43.635316   13752 command_runner.go:130] ! I0612 22:02:33.621857       1 main.go:107] hostIP = 172.23.200.184
	I0612 15:03:43.635316   13752 command_runner.go:130] ! podIP = 172.23.200.184
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:02:33.622055       1 main.go:116] setting mtu 1500 for CNI 
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:02:33.622069       1 main.go:146] kindnetd IP family: "ipv4"
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:02:33.622082       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:03:03.928722       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:03:03.948068       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:43.635455   13752 command_runner.go:130] ! I0612 22:03:03.948207       1 main.go:227] handling current node
	I0612 15:03:43.635455   13752 command_runner.go:130] ! I0612 22:03:04.015006       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:43.635511   13752 command_runner.go:130] ! I0612 22:03:04.015280       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:43.635511   13752 command_runner.go:130] ! I0612 22:03:04.015617       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.196.105 Flags: [] Table: 0} 
	I0612 15:03:43.635604   13752 command_runner.go:130] ! I0612 22:03:04.015960       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:43.635604   13752 command_runner.go:130] ! I0612 22:03:04.015976       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:43.635604   13752 command_runner.go:130] ! I0612 22:03:04.016053       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:43.635649   13752 command_runner.go:130] ! I0612 22:03:14.032118       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:43.635649   13752 command_runner.go:130] ! I0612 22:03:14.032228       1 main.go:227] handling current node
	I0612 15:03:43.635649   13752 command_runner.go:130] ! I0612 22:03:14.032243       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:43.635696   13752 command_runner.go:130] ! I0612 22:03:14.032255       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:43.635696   13752 command_runner.go:130] ! I0612 22:03:14.032739       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:43.635740   13752 command_runner.go:130] ! I0612 22:03:14.032836       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:43.635740   13752 command_runner.go:130] ! I0612 22:03:24.045393       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:43.635740   13752 command_runner.go:130] ! I0612 22:03:24.045492       1 main.go:227] handling current node
	I0612 15:03:43.635791   13752 command_runner.go:130] ! I0612 22:03:24.045504       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:43.635791   13752 command_runner.go:130] ! I0612 22:03:24.045510       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:43.635791   13752 command_runner.go:130] ! I0612 22:03:24.045926       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:43.635850   13752 command_runner.go:130] ! I0612 22:03:24.045941       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:43.635896   13752 command_runner.go:130] ! I0612 22:03:34.052186       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:43.635896   13752 command_runner.go:130] ! I0612 22:03:34.052288       1 main.go:227] handling current node
	I0612 15:03:43.635896   13752 command_runner.go:130] ! I0612 22:03:34.052302       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:43.635935   13752 command_runner.go:130] ! I0612 22:03:34.052309       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:43.635935   13752 command_runner.go:130] ! I0612 22:03:34.052423       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:43.635991   13752 command_runner.go:130] ! I0612 22:03:34.052452       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:43.639582   13752 logs.go:123] Gathering logs for kubelet ...
	I0612 15:03:43.639625   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 15:03:43.673570   13752 command_runner.go:130] > Jun 12 22:02:21 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674608   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.063456    1381 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:43.674608   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064093    1381 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.674703   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064387    1381 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:43.674703   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: E0612 22:02:22.065868    1381 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:43.674703   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789327    1437 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:43.674863   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789465    1437 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.674863   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.790480    1437 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:43.674863   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: E0612 22:02:22.790564    1437 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:43.674863   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:43.674963   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:43.674963   13752 command_runner.go:130] > Jun 12 22:02:23 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674963   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674963   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414046    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414147    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414632    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.416608    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.437750    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458497    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0612 15:03:43.675120   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458849    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0612 15:03:43.675120   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460038    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0612 15:03:43.675200   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460095    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-025000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464057    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464080    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464924    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466519    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466546    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0612 15:03:43.675368   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466613    1517 kubelet.go:312] "Adding apiserver pod source"
	I0612 15:03:43.675368   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.467352    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0612 15:03:43.675368   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.471384    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675368   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.471502    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675454   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.471869    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0612 15:03:43.675454   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.477415    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0612 15:03:43.675454   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.478424    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0612 15:03:43.675534   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.480523    1517 server.go:1264] "Started kubelet"
	I0612 15:03:43.675534   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.481568    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675534   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.481666    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.481865    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0612 15:03:43.675611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.482789    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0612 15:03:43.675611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.485497    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0612 15:03:43.675720   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.490040    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.493219    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.495119    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.496095    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.498560    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501388    1517 factory.go:221] Registration of the systemd container factory successfully
	I0612 15:03:43.675847   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501556    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0612 15:03:43.675847   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501657    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0612 15:03:43.675847   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.510641    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675931   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.510706    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675931   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.521028    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="200ms"
	I0612 15:03:43.676027   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.554579    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0612 15:03:43.676027   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.594809    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0612 15:03:43.676027   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595077    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0612 15:03:43.676027   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595178    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:43.676083   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598081    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0612 15:03:43.676083   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598418    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0612 15:03:43.676129   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598595    1517 policy_none.go:49] "None policy: Start"
	I0612 15:03:43.676129   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.600760    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.676129   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.602144    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:43.676129   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610755    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0612 15:03:43.676205   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610783    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0612 15:03:43.676205   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610843    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0612 15:03:43.676205   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.611758    1517 state_mem.go:75] "Updated machine memory state"
	I0612 15:03:43.676205   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.613995    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0612 15:03:43.676281   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.614216    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0612 15:03:43.676281   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615027    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0612 15:03:43.676281   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615636    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0612 15:03:43.676281   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615685    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0612 15:03:43.676355   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.615730    1517 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0612 15:03:43.676355   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.616221    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0612 15:03:43.676355   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.632621    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.676355   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.632711    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.676435   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.634150    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-025000\" not found"
	I0612 15:03:43.676435   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.644874    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:43.676435   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:43.676528   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:43.676528   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:43.676528   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:43.676601   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.717070    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d6071cd4356268889f798790dc93ce06" podNamespace="kube-system" podName="kube-apiserver-multinode-025000"
	I0612 15:03:43.676601   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.719714    1517 topology_manager.go:215] "Topology Admit Handler" podUID="88de11d8b1aaec126153d44e87c4b5dd" podNamespace="kube-system" podName="kube-controller-manager-multinode-025000"
	I0612 15:03:43.676601   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.720740    1517 topology_manager.go:215] "Topology Admit Handler" podUID="de62e7fd7d0feea82620e745032c1a67" podNamespace="kube-system" podName="kube-scheduler-multinode-025000"
	I0612 15:03:43.676684   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.722295    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="400ms"
	I0612 15:03:43.676684   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.724629    1517 topology_manager.go:215] "Topology Admit Handler" podUID="7b6b5637642f3d915c0db1461c7074e6" podNamespace="kube-system" podName="etcd-multinode-025000"
	I0612 15:03:43.676760   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725657    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad98f611536b15941d0f49c694b6b6c39318bca8a66620735a88a81a12d3610"
	I0612 15:03:43.676760   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725708    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb4351fab502e49592d49234119b810b53c5916eaf732d4ba148b3ad1eed4e6a"
	I0612 15:03:43.676760   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725720    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9e051df48486e732da2c72bf2d0e3ec93cf8774632ecedd8825e656ba04a93"
	I0612 15:03:43.676836   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725728    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2784305b1d5e9a088f0b73ff004b2d9eca305d397de3d7b9912638323d7c66b2"
	I0612 15:03:43.676836   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725737    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40443305b24f54fea9235d98bfb16f2d550b8914bfa46c0592b5c24be1ad5569"
	I0612 15:03:43.676836   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.736677    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9933fdc9ca72b65b57e5b4b996215763431b87f18af45fdc8195252497e1d9a"
	I0612 15:03:43.676912   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.760928    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d"
	I0612 15:03:43.676912   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.777475    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27"
	I0612 15:03:43.676912   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.794474    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f2d5f19e95ea2d1cfe140159a55c94f5d809c3b67661196b1e285ac389537f"
	I0612 15:03:43.676912   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.803790    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.676987   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.804820    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:43.676987   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885533    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-ca-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677069   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885705    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-ca-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:43.677069   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885746    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-k8s-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:43.677233   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885768    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-k8s-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677335   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885803    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-kubeconfig\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677335   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885844    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677421   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885869    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de62e7fd7d0feea82620e745032c1a67-kubeconfig\") pod \"kube-scheduler-multinode-025000\" (UID: \"de62e7fd7d0feea82620e745032c1a67\") " pod="kube-system/kube-scheduler-multinode-025000"
	I0612 15:03:43.677421   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885941    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-certs\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:43.677512   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885970    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-data\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:43.677512   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885997    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:43.677639   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.886023    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-flexvolume-dir\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677639   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.124157    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="800ms"
	I0612 15:03:43.677744   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.206204    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.677826   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.207259    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:43.677826   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.576346    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.677826   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.576490    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.677942   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.832319    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.677942   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.832430    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678020   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.847085    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678020   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.847226    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678101   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.894179    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678101   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.894251    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678178   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.910045    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7"
	I0612 15:03:43.678178   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.925848    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="1.6s"
	I0612 15:03:43.678260   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.967442    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:43.678260   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: I0612 22:02:27.008640    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.678260   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: E0612 22:02:27.009541    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:43.678338   13752 command_runner.go:130] > Jun 12 22:02:28 multinode-025000 kubelet[1517]: I0612 22:02:28.611782    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.678338   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.067503    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-025000"
	I0612 15:03:43.678338   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.069193    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-025000"
	I0612 15:03:43.678419   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.078543    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0612 15:03:43.678419   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.083746    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0612 15:03:43.678419   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.087512    1517 setters.go:580] "Node became not ready" node="multinode-025000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-12T22:02:31Z","lastTransitionTime":"2024-06-12T22:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0612 15:03:43.678496   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.485482    1517 apiserver.go:52] "Watching apiserver"
	I0612 15:03:43.678496   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.491838    1517 topology_manager.go:215] "Topology Admit Handler" podUID="1f004a05-3f5f-444b-9ac0-88f0e23da904" podNamespace="kube-system" podName="kindnet-bqlg8"
	I0612 15:03:43.678496   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.492246    1517 topology_manager.go:215] "Topology Admit Handler" podUID="10b24fa7-8eea-4fbb-ab18-404e853aa7ab" podNamespace="kube-system" podName="kube-proxy-47lr8"
	I0612 15:03:43.678496   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.493249    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-025000" podUID="6b429685-b322-4b00-83fc-743786ff40e1"
	I0612 15:03:43.678576   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494355    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-025000" podUID="630bafc4-4576-4974-b638-7ab52dcfec18"
	I0612 15:03:43.678652   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494642    1517 topology_manager.go:215] "Topology Admit Handler" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgcxw"
	I0612 15:03:43.678652   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494763    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4" podNamespace="kube-system" podName="storage-provisioner"
	I0612 15:03:43.678726   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494876    1517 topology_manager.go:215] "Topology Admit Handler" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4" podNamespace="default" podName="busybox-fc5497c4f-45qqd"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495127    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495306    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.499353    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.541672    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.557538    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-025000"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593012    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-cni-cfg\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593075    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-lib-modules\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593188    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-lib-modules\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593684    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d20f7489-1aa1-44b8-9221-4d1849884be4-tmp\") pod \"storage-provisioner\" (UID: \"d20f7489-1aa1-44b8-9221-4d1849884be4\") " pod="kube-system/storage-provisioner"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593711    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-xtables-lock\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593752    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-xtables-lock\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594460    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.094549489 +0000 UTC m=+6.763435539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.622682    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dcbc8e258f964f689941b6844769d9" path="/var/lib/kubelet/pods/04dcbc8e258f964f689941b6844769d9/volumes"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.623801    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610414aa8160848c0b6b79ea0a700b83" path="/var/lib/kubelet/pods/610414aa8160848c0b6b79ea0a700b83/volumes"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.626972    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.679371   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627014    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.679371   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627132    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.127114564 +0000 UTC m=+6.796000614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.679371   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.673848    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-025000" podStartSLOduration=0.673800971 podStartE2EDuration="673.800971ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.632162175 +0000 UTC m=+6.301048225" watchObservedRunningTime="2024-06-12 22:02:31.673800971 +0000 UTC m=+6.342686921"
	I0612 15:03:43.679557   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.674234    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-025000" podStartSLOduration=0.674226172 podStartE2EDuration="674.226172ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.67337587 +0000 UTC m=+6.342261920" watchObservedRunningTime="2024-06-12 22:02:31.674226172 +0000 UTC m=+6.343112222"
	I0612 15:03:43.679592   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099190    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.679592   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099284    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.099266752 +0000 UTC m=+7.768152702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.679592   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199774    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.679592   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199808    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680251   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199864    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.199845384 +0000 UTC m=+7.868731334 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680379   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.394461    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.774495    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.791274    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106313    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106394    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.106375874 +0000 UTC m=+9.775261924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208318    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208375    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208431    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.208413609 +0000 UTC m=+9.877299559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.617822    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.618103    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.125562    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.126376    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.12633293 +0000 UTC m=+13.795218980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226548    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226607    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226693    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.226674161 +0000 UTC m=+13.895560111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.616712    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617047    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.681015   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617270    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681078   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618147    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681078   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618607    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681231   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164650    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.681255   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164956    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.164935524 +0000 UTC m=+21.833821574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.265764    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266004    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.266062158 +0000 UTC m=+21.934948208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.616548    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.617577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:40 multinode-025000 kubelet[1517]: E0612 22:02:40.619032    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617010    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617816    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617105    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617755    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.617112    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.618034    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.621402    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234271    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.234402815 +0000 UTC m=+37.903288765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335532    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681858   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335632    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681923   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335696    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.33568009 +0000 UTC m=+38.004566140 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681923   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617048    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682076   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617530    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682120   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617040    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682186   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617673    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682218   13752 command_runner.go:130] > Jun 12 22:02:50 multinode-025000 kubelet[1517]: E0612 22:02:50.623368    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.682252   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.616848    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682350   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.617656    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682380   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617130    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682422   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617679    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682496   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617082    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682526   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617595    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682559   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.624795    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.617430    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.618180    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.616577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.617339    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:00 multinode-025000 kubelet[1517]: E0612 22:03:00.626741    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617176    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617573    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236005    1517 scope.go:117] "RemoveContainer" containerID="61910369e0d4ba1a5246a686e904c168fc7467d239e475004146ddf2835e8e78"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236962    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.239739    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d20f7489-1aa1-44b8-9221-4d1849884be4)\"" pod="kube-system/storage-provisioner" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284341    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.284401461 +0000 UTC m=+69.953287411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385432    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385531    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.385594617 +0000 UTC m=+70.054480667 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.616668    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.683200   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.617100    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617214    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617674    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.628542    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.616455    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.617581    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617093    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617405    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 kubelet[1517]: I0612 22:03:13.617647    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.637114    1517 scope.go:117] "RemoveContainer" containerID="0749f44d03561395230c8a60a41853a49502741bf3bcd45acc924d346061f5b0"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: E0612 22:03:25.663119    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.699754    1517 scope.go:117] "RemoveContainer" containerID="2455f315465b9508a3fe1025d7150342eedb3cb09eb5f8fd9b2cbbffe1306db0"
	I0612 15:03:43.723363   13752 logs.go:123] Gathering logs for describe nodes ...
	I0612 15:03:43.723363   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 15:03:43.918572   13752 command_runner.go:130] > Name:               multinode-025000
	I0612 15:03:43.918624   13752 command_runner.go:130] > Roles:              control-plane
	I0612 15:03:43.918660   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:43.918660   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:43.918660   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:43.918660   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000
	I0612 15:03:43.918698   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:43.918698   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:43.918734   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:43.918734   13752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0612 15:03:43.918762   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:43.918762   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:39:28 +0000
	I0612 15:03:43.918762   13752 command_runner.go:130] > Taints:             <none>
	I0612 15:03:43.918762   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:43.918762   13752 command_runner.go:130] > Lease:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000
	I0612 15:03:43.918762   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:43.918762   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 22:03:42 +0000
	I0612 15:03:43.918762   13752 command_runner.go:130] > Conditions:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0612 15:03:43.918762   13752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0612 15:03:43.918762   13752 command_runner.go:130] >   MemoryPressure   False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0612 15:03:43.918762   13752 command_runner.go:130] >   DiskPressure     False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0612 15:03:43.918762   13752 command_runner.go:130] >   PIDPressure      False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Ready            True    Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 22:03:11 +0000   KubeletReady                 kubelet is posting ready status
	I0612 15:03:43.918762   13752 command_runner.go:130] > Addresses:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   InternalIP:  172.23.200.184
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Hostname:    multinode-025000
	I0612 15:03:43.918762   13752 command_runner.go:130] > Capacity:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.918762   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.918762   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.918762   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.918762   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.918762   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.918762   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.918762   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.918762   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.918762   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.918762   13752 command_runner.go:130] > System Info:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Machine ID:                 e65e28dfa5bf4f27a0123e4ae1007793
	I0612 15:03:43.918762   13752 command_runner.go:130] >   System UUID:                3e5a42d3-ea80-0c4d-ad18-4b76e4f3e22f
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Boot ID:                    0efecf43-b070-4a8f-b542-4d1fd07306ad
	I0612 15:03:43.919355   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:43.919355   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:43.919355   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:43.919355   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:43.919355   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:43.919403   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:43.919403   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:43.919403   13752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0612 15:03:43.919403   13752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0612 15:03:43.919485   13752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0612 15:03:43.919485   13752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:43.919521   13752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0612 15:03:43.919552   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-45qqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:43.919552   13752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-vgcxw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0612 15:03:43.919587   13752 command_runner.go:130] >   kube-system                 etcd-multinode-025000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0612 15:03:43.919587   13752 command_runner.go:130] >   kube-system                 kindnet-bqlg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0612 15:03:43.919639   13752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-025000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0612 15:03:43.919673   13752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-025000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:43.919742   13752 command_runner.go:130] >   kube-system                 kube-proxy-47lr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:43.919742   13752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-025000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:43.919777   13752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:43.919777   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:43.919777   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:43.919854   13752 command_runner.go:130] >   Resource           Requests     Limits
	I0612 15:03:43.919873   13752 command_runner.go:130] >   --------           --------     ------
	I0612 15:03:43.919873   13752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0612 15:03:43.919899   13752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0612 15:03:43.919899   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0612 15:03:43.919899   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0612 15:03:43.919929   13752 command_runner.go:130] > Events:
	I0612 15:03:43.919929   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:43.919967   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:43.919967   13752 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0612 15:03:43.919967   13752 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0612 15:03:43.919967   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:43.920025   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.920025   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:43.920025   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.920086   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.920086   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:43.920129   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:43.920129   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.920129   13752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0612 15:03:43.920189   13752 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:43.920189   13752 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-025000 status is now: NodeReady
	I0612 15:03:43.920189   13752 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0612 15:03:43.920236   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:43.920271   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.920306   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Normal  RegisteredNode           59s                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:43.920337   13752 command_runner.go:130] > Name:               multinode-025000-m02
	I0612 15:03:43.920337   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:43.920337   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m02
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_42_39_0700
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:43.920337   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:43.920337   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:42:39 +0000
	I0612 15:03:43.920337   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:43.920337   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:43.920337   13752 command_runner.go:130] > Lease:
	I0612 15:03:43.920337   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m02
	I0612 15:03:43.920337   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:43.920337   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:20 +0000
	I0612 15:03:43.920337   13752 command_runner.go:130] > Conditions:
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:43.920337   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:43.920337   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.920337   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.920337   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.920337   13752 command_runner.go:130] > Addresses:
	I0612 15:03:43.920337   13752 command_runner.go:130] >   InternalIP:  172.23.196.105
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Hostname:    multinode-025000-m02
	I0612 15:03:43.920337   13752 command_runner.go:130] > Capacity:
	I0612 15:03:43.920337   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.920866   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.920866   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.920866   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.920866   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.920915   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:43.920915   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.920952   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.920952   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.920952   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.920952   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.920952   13752 command_runner.go:130] > System Info:
	I0612 15:03:43.920952   13752 command_runner.go:130] >   Machine ID:                 c11d7ff5518449f8bc8169a1fd7b0c4b
	I0612 15:03:43.920952   13752 command_runner.go:130] >   System UUID:                3b021c48-8479-f34c-83c2-77b944a77c5e
	I0612 15:03:43.920952   13752 command_runner.go:130] >   Boot ID:                    67e77c09-c6b2-4c01-b167-2481dd4a7a96
	I0612 15:03:43.920952   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:43.921034   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:43.921034   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:43.921034   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:43.921034   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:43.921071   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:43.921071   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:43.921071   13752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0612 15:03:43.921107   13752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0612 15:03:43.921107   13752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0612 15:03:43.921107   13752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:43.921167   13752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0612 15:03:43.921167   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-9bsls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:43.921167   13752 command_runner.go:130] >   kube-system                 kindnet-v4cqk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0612 15:03:43.921233   13752 command_runner.go:130] >   kube-system                 kube-proxy-tdcdp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0612 15:03:43.921233   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:43.921233   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:43.921276   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:43.921276   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:43.921276   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:43.921325   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:43.921325   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:43.921325   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:43.921367   13752 command_runner.go:130] > Events:
	I0612 15:03:43.921367   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:43.921367   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:43.921408   13752 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0612 15:03:43.921408   13752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-025000-m02 status is now: NodeReady
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  RegisteredNode           59s                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeNotReady             19s                node-controller  Node multinode-025000-m02 status is now: NodeNotReady
	I0612 15:03:43.921448   13752 command_runner.go:130] > Name:               multinode-025000-m03
	I0612 15:03:43.921448   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:43.921448   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m03
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_57_59_0700
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:43.921448   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:43.921448   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:57:58 +0000
	I0612 15:03:43.921448   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:43.921448   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:43.921448   13752 command_runner.go:130] > Lease:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m03
	I0612 15:03:43.921448   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:43.921448   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:00 +0000
	I0612 15:03:43.921448   13752 command_runner.go:130] > Conditions:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:43.921448   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:43.921448   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.921448   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.921448   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.921448   13752 command_runner.go:130] > Addresses:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   InternalIP:  172.23.206.72
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Hostname:    multinode-025000-m03
	I0612 15:03:43.921448   13752 command_runner.go:130] > Capacity:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.921448   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.921448   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.921448   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.921448   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.921448   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.921448   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.921448   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.921448   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.921448   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.921448   13752 command_runner.go:130] > System Info:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Machine ID:                 b62d5e6740dc42d880d6595ac7dd57ae
	I0612 15:03:43.922029   13752 command_runner.go:130] >   System UUID:                31a13a9b-b7c6-6643-8352-fb322079216a
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Boot ID:                    a21b9eff-2471-4589-9e35-5845aae64358
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:43.922029   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:43.922139   13752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0612 15:03:43.922139   13752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0612 15:03:43.922181   13752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0612 15:03:43.922181   13752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:43.922207   13752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0612 15:03:43.922223   13752 command_runner.go:130] >   kube-system                 kindnet-8252q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0612 15:03:43.922223   13752 command_runner.go:130] >   kube-system                 kube-proxy-7jwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0612 15:03:43.922223   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:43.922223   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:43.922223   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:43.922292   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:43.922292   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:43.922292   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:43.922292   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:43.922292   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:43.922292   13752 command_runner.go:130] > Events:
	I0612 15:03:43.922371   13752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0612 15:03:43.922371   13752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0612 15:03:43.922371   13752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0612 15:03:43.922371   13752 command_runner.go:130] >   Normal  Starting                 5m42s                  kube-proxy       
	I0612 15:03:43.922371   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:43.922548   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.922548   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:43.922548   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.922625   13752 command_runner.go:130] >   Normal  RegisteredNode           5m44s                  node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:43.922661   13752 command_runner.go:130] >   Normal  NodeReady                5m37s                  kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:43.922661   13752 command_runner.go:130] >   Normal  NodeNotReady             3m58s                  node-controller  Node multinode-025000-m03 status is now: NodeNotReady
	I0612 15:03:43.922661   13752 command_runner.go:130] >   Normal  RegisteredNode           59s                    node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:43.932229   13752 logs.go:123] Gathering logs for kube-scheduler [755750ecd1e3] ...
	I0612 15:03:43.932229   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 755750ecd1e3"
	I0612 15:03:43.942673   13752 command_runner.go:130] ! I0612 22:02:28.771072       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:43.942673   13752 command_runner.go:130] ! W0612 22:02:31.003959       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:43.957875   13752 command_runner.go:130] ! W0612 22:02:31.004072       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.957875   13752 command_runner.go:130] ! W0612 22:02:31.004087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:43.957875   13752 command_runner.go:130] ! W0612 22:02:31.004098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.034273       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.034440       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.039288       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.039331       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.039699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.040018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.139849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:43.959985   13752 logs.go:123] Gathering logs for kube-proxy [227a905829b0] ...
	I0612 15:03:43.959985   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227a905829b0"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.538961       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.585761       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.200.184"]
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.754056       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.754118       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.754141       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.765449       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:43.986095   13752 command_runner.go:130] ! I0612 22:02:33.766192       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:43.986095   13752 command_runner.go:130] ! I0612 22:02:33.766246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.986132   13752 command_runner.go:130] ! I0612 22:02:33.769980       1 config.go:192] "Starting service config controller"
	I0612 15:03:43.986132   13752 command_runner.go:130] ! I0612 22:02:33.770461       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:43.986174   13752 command_runner.go:130] ! I0612 22:02:33.770493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:43.986174   13752 command_runner.go:130] ! I0612 22:02:33.770630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:43.986174   13752 command_runner.go:130] ! I0612 22:02:33.773852       1 config.go:319] "Starting node config controller"
	I0612 15:03:43.986238   13752 command_runner.go:130] ! I0612 22:02:33.773944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:43.986238   13752 command_runner.go:130] ! I0612 22:02:33.870743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:43.986265   13752 command_runner.go:130] ! I0612 22:02:33.870698       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:43.986265   13752 command_runner.go:130] ! I0612 22:02:33.882534       1 shared_informer.go:320] Caches are synced for node config
	I0612 15:03:43.988383   13752 logs.go:123] Gathering logs for kindnet [4d60d82f6bc5] ...
	I0612 15:03:43.988383   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d60d82f6bc5"
	I0612 15:03:44.012769   13752 command_runner.go:130] ! I0612 21:48:53.982546       1 main.go:227] handling current node
	I0612 15:03:44.012769   13752 command_runner.go:130] ! I0612 21:48:53.982561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.016637   13752 command_runner.go:130] ! I0612 21:48:53.982568       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.016674   13752 command_runner.go:130] ! I0612 21:48:53.982982       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.016722   13752 command_runner.go:130] ! I0612 21:48:53.983049       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.016722   13752 command_runner.go:130] ! I0612 21:49:03.989649       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.016722   13752 command_runner.go:130] ! I0612 21:49:03.989791       1 main.go:227] handling current node
	I0612 15:03:44.016767   13752 command_runner.go:130] ! I0612 21:49:03.989809       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:03.989817       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:03.990195       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:03.990415       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:14.000384       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:14.000493       1 main.go:227] handling current node
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:14.000507       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:14.000513       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017720   13752 command_runner.go:130] ! I0612 21:49:14.000627       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017720   13752 command_runner.go:130] ! I0612 21:49:14.000640       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017782   13752 command_runner.go:130] ! I0612 21:49:24.006829       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017782   13752 command_runner.go:130] ! I0612 21:49:24.006871       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:24.006883       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:24.006889       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:24.007645       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:24.007745       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.016679       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.016806       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.016838       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.016845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.017149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.017279       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.025835       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.025933       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.025947       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.025955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.026381       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.026533       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033148       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033257       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033273       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033281       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033402       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033435       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:50:04.046279       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:50:04.046719       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:50:04.046832       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:50:04.047109       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018359   13752 command_runner.go:130] ! I0612 21:50:04.047537       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018359   13752 command_runner.go:130] ! I0612 21:50:04.047572       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018359   13752 command_runner.go:130] ! I0612 21:50:14.064171       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018473   13752 command_runner.go:130] ! I0612 21:50:14.064216       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:14.064230       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:14.064236       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:14.064574       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:14.064665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.071894       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.071935       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.071949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.071955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.072148       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.072184       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086428       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086522       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086536       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086543       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086690       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.093862       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.093905       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.093919       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.093925       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.094840       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.094916       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.102869       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103074       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103091       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103100       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103237       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103276       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110391       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110501       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110517       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110556       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110721       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110794       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121126       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121263       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121280       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121288       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121430       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121462       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.131659       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.131690       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.131702       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.131708       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.132287       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019115   13752 command_runner.go:130] ! I0612 21:51:24.132319       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019115   13752 command_runner.go:130] ! I0612 21:51:34.139419       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019115   13752 command_runner.go:130] ! I0612 21:51:34.139546       1 main.go:227] handling current node
	I0612 15:03:44.019165   13752 command_runner.go:130] ! I0612 21:51:34.139561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019165   13752 command_runner.go:130] ! I0612 21:51:34.139570       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019165   13752 command_runner.go:130] ! I0612 21:51:34.140149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019165   13752 command_runner.go:130] ! I0612 21:51:34.140253       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019217   13752 command_runner.go:130] ! I0612 21:51:44.152295       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019217   13752 command_runner.go:130] ! I0612 21:51:44.152430       1 main.go:227] handling current node
	I0612 15:03:44.019217   13752 command_runner.go:130] ! I0612 21:51:44.152464       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019257   13752 command_runner.go:130] ! I0612 21:51:44.152471       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019257   13752 command_runner.go:130] ! I0612 21:51:44.153262       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019257   13752 command_runner.go:130] ! I0612 21:51:44.153471       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019257   13752 command_runner.go:130] ! I0612 21:51:54.160684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019307   13752 command_runner.go:130] ! I0612 21:51:54.160938       1 main.go:227] handling current node
	I0612 15:03:44.019307   13752 command_runner.go:130] ! I0612 21:51:54.160953       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019347   13752 command_runner.go:130] ! I0612 21:51:54.160960       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019347   13752 command_runner.go:130] ! I0612 21:51:54.161457       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019388   13752 command_runner.go:130] ! I0612 21:51:54.161482       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019388   13752 command_runner.go:130] ! I0612 21:52:04.170421       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.170526       1 main.go:227] handling current node
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.170541       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.170548       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.171076       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.171113       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:14.180403       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019486   13752 command_runner.go:130] ! I0612 21:52:14.180490       1 main.go:227] handling current node
	I0612 15:03:44.019486   13752 command_runner.go:130] ! I0612 21:52:14.180508       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019486   13752 command_runner.go:130] ! I0612 21:52:14.180516       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019486   13752 command_runner.go:130] ! I0612 21:52:14.180994       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:14.181032       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:24.195314       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:24.195545       1 main.go:227] handling current node
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:24.195735       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:24.195807       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019666   13752 command_runner.go:130] ! I0612 21:52:24.196026       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019666   13752 command_runner.go:130] ! I0612 21:52:24.196064       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019735   13752 command_runner.go:130] ! I0612 21:52:34.202013       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019735   13752 command_runner.go:130] ! I0612 21:52:34.202806       1 main.go:227] handling current node
	I0612 15:03:44.019807   13752 command_runner.go:130] ! I0612 21:52:34.202932       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019807   13752 command_runner.go:130] ! I0612 21:52:34.203029       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019867   13752 command_runner.go:130] ! I0612 21:52:34.203265       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019942   13752 command_runner.go:130] ! I0612 21:52:34.203299       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019942   13752 command_runner.go:130] ! I0612 21:52:44.209271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020009   13752 command_runner.go:130] ! I0612 21:52:44.209440       1 main.go:227] handling current node
	I0612 15:03:44.020009   13752 command_runner.go:130] ! I0612 21:52:44.209476       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:44.209546       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:44.209839       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:44.210283       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:54.223351       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:54.223443       1 main.go:227] handling current node
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:52:54.223459       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:52:54.223466       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:52:54.223810       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:52:54.223840       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:53:04.236876       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:53:04.237155       1 main.go:227] handling current node
	I0612 15:03:44.020221   13752 command_runner.go:130] ! I0612 21:53:04.237949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020221   13752 command_runner.go:130] ! I0612 21:53:04.238341       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020221   13752 command_runner.go:130] ! I0612 21:53:04.238673       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020221   13752 command_runner.go:130] ! I0612 21:53:04.238707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020293   13752 command_runner.go:130] ! I0612 21:53:14.245069       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020293   13752 command_runner.go:130] ! I0612 21:53:14.245110       1 main.go:227] handling current node
	I0612 15:03:44.020293   13752 command_runner.go:130] ! I0612 21:53:14.245122       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020293   13752 command_runner.go:130] ! I0612 21:53:14.245131       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:14.245834       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:14.245932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:24.258923       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:24.258965       1 main.go:227] handling current node
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:24.258977       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:24.258983       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:24.259367       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:24.259399       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.265573       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.265738       1 main.go:227] handling current node
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.265787       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.265797       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.266180       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020486   13752 command_runner.go:130] ! I0612 21:53:34.266257       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020486   13752 command_runner.go:130] ! I0612 21:53:44.278968       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020486   13752 command_runner.go:130] ! I0612 21:53:44.279173       1 main.go:227] handling current node
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:44.279207       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:44.279294       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:44.279698       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:44.279829       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:54.290366       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:54.290472       1 main.go:227] handling current node
	I0612 15:03:44.020597   13752 command_runner.go:130] ! I0612 21:53:54.290487       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020597   13752 command_runner.go:130] ! I0612 21:53:54.290494       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020597   13752 command_runner.go:130] ! I0612 21:53:54.291158       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020597   13752 command_runner.go:130] ! I0612 21:53:54.291263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020660   13752 command_runner.go:130] ! I0612 21:54:04.308014       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020660   13752 command_runner.go:130] ! I0612 21:54:04.308117       1 main.go:227] handling current node
	I0612 15:03:44.020660   13752 command_runner.go:130] ! I0612 21:54:04.308133       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020660   13752 command_runner.go:130] ! I0612 21:54:04.308142       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:04.308605       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:04.308643       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:14.316271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:14.316380       1 main.go:227] handling current node
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:14.316396       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:14.316403       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:14.316942       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:14.316959       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:24.330853       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:24.331009       1 main.go:227] handling current node
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:24.331025       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:24.331033       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:24.331178       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:24.331213       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:34.340396       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:34.340543       1 main.go:227] handling current node
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:34.340558       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:34.340565       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:34.340924       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:34.341013       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:44.347468       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:44.347599       1 main.go:227] handling current node
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:44.347614       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:44.347622       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:44.348279       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:44.348396       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:54.364900       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365031       1 main.go:227] handling current node
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365046       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365054       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365542       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365727       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381041       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381087       1 main.go:227] handling current node
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381103       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381110       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381700       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:04.381853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:14.395619       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:14.395666       1 main.go:227] handling current node
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:14.395679       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:14.395686       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:14.396514       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:14.396536       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:24.411927       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:24.412012       1 main.go:227] handling current node
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:24.412028       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:24.412036       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:24.412568       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:24.412661       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:34.420011       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:34.420100       1 main.go:227] handling current node
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:34.420115       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021382   13752 command_runner.go:130] ! I0612 21:55:34.420122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021382   13752 command_runner.go:130] ! I0612 21:55:34.420481       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021382   13752 command_runner.go:130] ! I0612 21:55:34.420570       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021382   13752 command_runner.go:130] ! I0612 21:55:44.432502       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021441   13752 command_runner.go:130] ! I0612 21:55:44.432604       1 main.go:227] handling current node
	I0612 15:03:44.021441   13752 command_runner.go:130] ! I0612 21:55:44.432620       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021441   13752 command_runner.go:130] ! I0612 21:55:44.432632       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021441   13752 command_runner.go:130] ! I0612 21:55:44.432881       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:44.433061       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.446991       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.447440       1 main.go:227] handling current node
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.447622       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.447655       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.447830       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:55:54.447901       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:56:04.463393       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:56:04.463546       1 main.go:227] handling current node
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:56:04.463575       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:56:04.463596       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021628   13752 command_runner.go:130] ! I0612 21:56:04.463900       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021628   13752 command_runner.go:130] ! I0612 21:56:04.463932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021628   13752 command_runner.go:130] ! I0612 21:56:14.477690       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021628   13752 command_runner.go:130] ! I0612 21:56:14.477837       1 main.go:227] handling current node
	I0612 15:03:44.021670   13752 command_runner.go:130] ! I0612 21:56:14.477852       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021670   13752 command_runner.go:130] ! I0612 21:56:14.477860       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021670   13752 command_runner.go:130] ! I0612 21:56:14.478029       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:14.478096       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.485525       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.485620       1 main.go:227] handling current node
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.485655       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.485663       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.486202       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:24.486237       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:34.502904       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:34.502951       1 main.go:227] handling current node
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:34.502964       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:34.502970       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:34.503088       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:34.503684       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:44.512292       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:44.512356       1 main.go:227] handling current node
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:44.512368       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:44.512374       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:44.512909       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:44.513033       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:54.520903       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:54.521017       1 main.go:227] handling current node
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:54.521034       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:54.521041       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022037   13752 command_runner.go:130] ! I0612 21:56:54.521441       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022037   13752 command_runner.go:130] ! I0612 21:56:54.521665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022037   13752 command_runner.go:130] ! I0612 21:57:04.535531       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022037   13752 command_runner.go:130] ! I0612 21:57:04.535625       1 main.go:227] handling current node
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:04.535665       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:04.535672       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:04.536272       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:04.536355       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:14.559304       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:14.559354       1 main.go:227] handling current node
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:14.559375       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022181   13752 command_runner.go:130] ! I0612 21:57:14.559382       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022181   13752 command_runner.go:130] ! I0612 21:57:14.559735       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022181   13752 command_runner.go:130] ! I0612 21:57:14.560332       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568057       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568103       1 main.go:227] handling current node
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568116       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568938       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.569042       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022331   13752 command_runner.go:130] ! I0612 21:57:34.584121       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022331   13752 command_runner.go:130] ! I0612 21:57:34.584277       1 main.go:227] handling current node
	I0612 15:03:44.022384   13752 command_runner.go:130] ! I0612 21:57:34.584502       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022384   13752 command_runner.go:130] ! I0612 21:57:34.584607       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022462   13752 command_runner.go:130] ! I0612 21:57:34.584995       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:34.585095       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:44.600201       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:44.600339       1 main.go:227] handling current node
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:44.600353       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:44.600361       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022619   13752 command_runner.go:130] ! I0612 21:57:44.600842       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:44.600859       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:54.615436       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:54.615497       1 main.go:227] handling current node
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:54.615511       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:54.615536       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022734   13752 command_runner.go:130] ! I0612 21:58:04.629487       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022734   13752 command_runner.go:130] ! I0612 21:58:04.629657       1 main.go:227] handling current node
	I0612 15:03:44.022734   13752 command_runner.go:130] ! I0612 21:58:04.629797       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022734   13752 command_runner.go:130] ! I0612 21:58:04.629891       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:04.630131       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:04.631059       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:04.631221       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:14.647500       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:14.647527       1 main.go:227] handling current node
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:14.647539       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:14.647544       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:14.647661       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:14.647672       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:24.655905       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:24.656017       1 main.go:227] handling current node
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:24.656064       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:24.656140       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:24.656636       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:24.656721       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:34.670254       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:34.670590       1 main.go:227] handling current node
	I0612 15:03:44.023071   13752 command_runner.go:130] ! I0612 21:58:34.670966       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023071   13752 command_runner.go:130] ! I0612 21:58:34.671845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023071   13752 command_runner.go:130] ! I0612 21:58:34.672269       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023071   13752 command_runner.go:130] ! I0612 21:58:34.672369       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.682684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.682854       1 main.go:227] handling current node
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.682877       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.682887       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.683737       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023223   13752 command_runner.go:130] ! I0612 21:58:44.683808       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023223   13752 command_runner.go:130] ! I0612 21:58:54.691077       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023287   13752 command_runner.go:130] ! I0612 21:58:54.691167       1 main.go:227] handling current node
	I0612 15:03:44.023287   13752 command_runner.go:130] ! I0612 21:58:54.691199       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023287   13752 command_runner.go:130] ! I0612 21:58:54.691207       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023393   13752 command_runner.go:130] ! I0612 21:58:54.691344       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023393   13752 command_runner.go:130] ! I0612 21:58:54.691357       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.700863       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701017       1 main.go:227] handling current node
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701032       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701040       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701620       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701736       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:14.717668       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:14.717949       1 main.go:227] handling current node
	I0612 15:03:44.023598   13752 command_runner.go:130] ! I0612 21:59:14.717991       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023647   13752 command_runner.go:130] ! I0612 21:59:14.718050       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023751   13752 command_runner.go:130] ! I0612 21:59:14.718200       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023751   13752 command_runner.go:130] ! I0612 21:59:14.718263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724311       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724441       1 main.go:227] handling current node
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724456       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724464       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724785       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:34.737266       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023888   13752 command_runner.go:130] ! I0612 21:59:34.737410       1 main.go:227] handling current node
	I0612 15:03:44.023888   13752 command_runner.go:130] ! I0612 21:59:34.737425       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023888   13752 command_runner.go:130] ! I0612 21:59:34.737432       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023888   13752 command_runner.go:130] ! I0612 21:59:34.738157       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:34.738269       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:44.746123       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:44.746292       1 main.go:227] handling current node
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:44.746313       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:44.746332       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:44.746856       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:44.746925       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.752611       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.752658       1 main.go:227] handling current node
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.752671       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.752678       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.753183       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.753277       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
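Note on the kindnet output above: it is the plugin's periodic reconciliation loop. Roughly every ten seconds it walks the node list, logs the pod CIDR assigned to each peer (10.244.1.0/24 for multinode-025000-m02; 10.244.2.0/24 and, after the node was recreated, 10.244.3.0/24 for multinode-025000-m03), and ensures a host route to each peer CIDR exists. When multinode-025000-m03 reappeared at 21:58:04 with a new IP (172.23.206.72) and new CIDR (10.244.3.0/24), routes.go installed the corresponding route. The following is a minimal sketch of that kind of route programming, assuming the github.com/vishvananda/netlink package; it is illustrative only, not kindnet's actual source.

    package main

    import (
        "fmt"
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    // addPodCIDRRoute installs a host route that sends traffic for a peer
    // node's pod CIDR via that node's IP, mirroring the
    // "Adding route {... Dst: 10.244.3.0/24 ... Gw: 172.23.206.72 ...}"
    // line in the kindnet log above.
    func addPodCIDRRoute(podCIDR, nodeIP string) error {
        _, dst, err := net.ParseCIDR(podCIDR)
        if err != nil {
            return err
        }
        gw := net.ParseIP(nodeIP)
        if gw == nil {
            return fmt.Errorf("bad node IP %q", nodeIP)
        }
        // RouteReplace is idempotent, which suits a loop that re-walks
        // the node list every ~10 seconds as the log above does.
        return netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: gw})
    }

    func main() {
        // Values taken from the 21:58:04 route line above.
        if err := addPodCIDRRoute("10.244.3.0/24", "172.23.206.72"); err != nil {
            log.Fatal(err)
        }
    }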
	I0612 15:03:44.040336   13752 logs.go:123] Gathering logs for Docker ...
	I0612 15:03:44.040336   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0612 15:03:44.077287   13752 command_runner.go:130] > Jun 12 22:00:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:44.077527   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:44.077527   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077527   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:44.077527   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077588   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:44.077588   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:44.077641   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:44.077641   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:44.077641   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:44.077641   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
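Note on the journal excerpt above: cri-dockerd crash-loops during early boot because it starts before dockerd and cannot reach the daemon at unix:///var/run/docker.sock, exiting with status 1 each time. After the third rapid failure, systemd's start rate limiting holds the unit ("Start request repeated too quickly"); the dependency only becomes satisfiable once dockerd itself starts at 22:01:50 on the renamed multinode-025000 host. The sketch below shows the kind of readiness check involved, assuming Docker's documented GET /_ping endpoint over the unix socket; it is not cri-dockerd's actual code.

    package main

    import (
        "context"
        "fmt"
        "net"
        "net/http"
        "time"
    )

    func main() {
        // Dial every request to dockerd's unix socket; the URL host below
        // is a placeholder that this custom dialer ignores.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
                },
            },
            Timeout: 2 * time.Second,
        }
        // GET /_ping is Docker's liveness endpoint; it fails while dockerd
        // is still down, which is the "Cannot connect to the Docker daemon"
        // condition cri-dockerd logs above.
        resp, err := client.Get("http://docker/_ping")
        if err != nil {
            fmt.Println("dockerd not ready:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("dockerd ready:", resp.Status)
    }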
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.903212301Z" level=info msg="Starting up"
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.904075211Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:44.078151   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.905013523Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=653
	I0612 15:03:44.078151   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.936715611Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960715605Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960765806Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960836707Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961045509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961654317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961681417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961916220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962126123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962152723Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962167223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078436   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962695730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078436   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.963400938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078436   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966083771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.078436   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966199872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078577   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.078577   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966461076Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:44.078577   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967039883Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:44.078577   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967257385Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967282486Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974400773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974631276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974732277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974755077Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974771478Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:44.078829   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974844078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:44.078829   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975137982Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:44.078829   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975475986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:44.078829   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975634588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:44.078917   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975657088Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:44.078917   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975672789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.078917   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975691989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.078986   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975721989Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.078986   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975744389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.078986   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975762790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975776490Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975789190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975800790Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975819990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975835091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079163   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975847091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079163   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975859491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079163   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975870791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079247   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975883291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079247   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975894491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079247   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975906891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079247   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975920192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079334   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975935492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079334   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975947192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079334   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975958792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079433   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975971092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079433   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975989492Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:44.079433   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976009893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976030193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976044093Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976167595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976210595Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976227295Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976239996Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976250696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976263096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976273096Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976489199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976766002Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976819403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976839003Z" level=info msg="containerd successfully booted in 0.042772s"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:51 multinode-025000 dockerd[647]: time="2024-06-12T22:01:51.958896661Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.175284022Z" level=info msg="Loading containers: start."
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.600253538Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.679773678Z" level=info msg="Loading containers: done."
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.711890198Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.712661408Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774658419Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774960723Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.292813222Z" level=info msg="Processing signal 'terminated'"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 systemd[1]: Stopping Docker Application Container Engine...
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.294859626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295213927Z" level=info msg="Daemon shutdown complete"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295258527Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295281927Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: docker.service: Deactivated successfully.
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Stopped Docker Application Container Engine.
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.376333019Z" level=info msg="Starting up"
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.377520222Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.378639425Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0612 15:03:44.080170   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.412854304Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:44.080170   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437361860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:44.080217   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437471260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:44.080266   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437558660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:44.080266   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437600861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080266   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437638361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.080362   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437674061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080400   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437957561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.080447   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438006462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080463   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438028962Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:44.080463   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438041362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080532   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438072362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080532   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438209862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080532   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441166869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.080619   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441307169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080619   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441467569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.080619   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441599370Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:44.080703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441629870Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:44.080703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441648170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:44.080703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441660470Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:44.080785   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442075271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:44.080785   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442166571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:44.080785   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442187871Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:44.080785   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442201971Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:44.080870   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442217371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:44.080870   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442266071Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:44.080870   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442474372Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:44.080870   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442551072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:44.080953   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442567272Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:44.080953   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442579372Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:44.080953   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442592672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442605072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442627672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442645772Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442660172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442671872Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442683572Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442694372Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442714572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442727972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081248   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442739972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081248   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442754772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081248   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442766572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081248   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442778073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081335   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442788873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081335   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442800473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081335   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442812673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081442   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442826373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081442   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442837973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081442   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442849073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081442   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442860373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081522   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442875173Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:44.081522   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442974073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081522   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442994973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081608   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443006773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:44.081608   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443066573Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:44.081608   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443088973Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:44.081608   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443100473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:44.081689   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443113173Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:44.081689   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443144073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081762   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443156573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:44.081762   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443166273Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443418874Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443494174Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443534574Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443571274Z" level=info msg="containerd successfully booted in 0.033238s"
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.419757425Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.449018892Z" level=info msg="Loading containers: start."
	I0612 15:03:44.081945   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.739331061Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:44.081999   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.815989438Z" level=info msg="Loading containers: done."
	I0612 15:03:44.081999   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842536299Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:44.082057   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842674899Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885012997Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885608398Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:44.082166   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:44.082166   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:44.082166   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0612 15:03:44.082166   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Loaded network plugin cni"
	I0612 15:03:44.082247   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0612 15:03:44.082247   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0612 15:03:44.082247   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0612 15:03:44.082247   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0612 15:03:44.082329   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start cri-dockerd grpc backend"
	I0612 15:03:44.082329   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.082329   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-vgcxw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d\""
	I0612 15:03:44.082416   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-45qqd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27\""
	I0612 15:03:44.082416   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449365529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.082416   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449468129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.082507   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449499429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082507   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449616229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082507   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464315863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.082588   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464397563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.082588   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464444563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082588   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464765264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082676   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.578440826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.082676   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.581064832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.082676   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582145135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082758   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582532135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082758   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617373216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.082758   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617486816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.082838   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617504016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082838   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617593816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082838   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da184577f0371664d0a472b38bbfcfd866178308bf69eaabdaefb47d30a7057a/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.082919   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a228f6c30fdf44f53a40ac14a2a8b995155f743739957ac413c700924fc873ed/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.082919   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20cbfb3fb853177b89366d165b6a1f67628b2c429266b77034ee6d1ca68b7bac/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.082919   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.082998   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094370315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083048   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094456516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094499716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094865116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.162934973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163009674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163029074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163177074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.167659984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170028290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170289390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.171053192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233482736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233861237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234167138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234578639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.197280978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198158780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198341381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213822116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213977717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214060117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214298317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234135963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234182263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234192563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234264863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564394224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564548725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083850   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564602325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.565056126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630517377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630663477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630850678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.635052387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.972834166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.973545267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974028469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974235669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1044]: time="2024-06-12T22:03:03.121297409Z" level=info msg="ignoring event" container=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.122616734Z" level=info msg="shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123474651Z" level=warning msg="cleaning up after shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123682355Z" level=info msg="cleaning up dead shim" namespace=moby
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819634342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819751243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819788644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.820654753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004015440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004176540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004193540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.005298945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006561551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006633551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006681251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006796752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/986567ef57643aec05ae5353795c364b380cb0f13c2ba98b1c4e04897e7b2e46/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2434f89aefe0079002e81e136580c67ef1dca28bfa3b4c1e950241aea9663d4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542434894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542705495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542742195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.543238997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606926167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606994167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607017268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607410069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.106614   13752 logs.go:123] Gathering logs for dmesg ...
	I0612 15:03:44.106614   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 15:03:44.135691   13752 command_runner.go:130] > [Jun12 22:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.131000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.025099] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.064850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.023448] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0612 15:03:44.135691   13752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +5.508165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +1.342262] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +1.269809] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +7.259362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0612 15:03:44.135691   13752 command_runner.go:130] > [Jun12 22:01] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.155290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [Jun12 22:02] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.095843] kauditd_printk_skb: 73 callbacks suppressed
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.507476] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.171390] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.210222] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +2.904531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.189304] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.162041] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.261611] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.815328] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.096217] kauditd_printk_skb: 205 callbacks suppressed
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +3.646175] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +1.441935] kauditd_printk_skb: 54 callbacks suppressed
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +5.624550] kauditd_printk_skb: 20 callbacks suppressed
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +3.644538] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +8.250122] kauditd_printk_skb: 70 callbacks suppressed
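	The dmesg block above was collected by the ssh_runner line that precedes it, which hands a whole shell pipeline to /bin/bash -c. A minimal Go sketch of that pattern follows; runCommand is a hypothetical helper for illustration, not minikube's actual command_runner API.

	// Sketch: run a compound shell pipeline the way the ssh_runner lines in
	// this log do — pass the whole string to bash -c and capture the output.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func runCommand(pipeline string) (string, error) {
		// CombinedOutput captures stdout and stderr together, which is
		// usually what you want when collecting diagnostics.
		out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := runCommand(`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
		if err != nil {
			fmt.Println("dmesg failed:", err)
		}
		fmt.Print(out)
	}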
	I0612 15:03:44.138662   13752 logs.go:123] Gathering logs for coredns [26e5daf354e3] ...
	I0612 15:03:44.138662   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e5daf354e3"
	I0612 15:03:44.163879   13752 command_runner.go:130] > .:53
	I0612 15:03:44.164737   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:44.164737   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:44.164737   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:44.164737   13752 command_runner.go:130] > [INFO] 127.0.0.1:54952 - 9035 "HINFO IN 225709527310201015.7757756956422223857. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039110892s
	I0612 15:03:44.165079   13752 logs.go:123] Gathering logs for kube-apiserver [bbe2d2e51b5f] ...
	I0612 15:03:44.165114   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe2d2e51b5f"
	I0612 15:03:44.187117   13752 command_runner.go:130] ! I0612 22:02:28.032945       1 options.go:221] external host was not specified, using 172.23.200.184
	I0612 15:03:44.193233   13752 command_runner.go:130] ! I0612 22:02:28.036290       1 server.go:148] Version: v1.30.1
	I0612 15:03:44.193233   13752 command_runner.go:130] ! I0612 22:02:28.036339       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:44.193311   13752 command_runner.go:130] ! I0612 22:02:28.916544       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 15:03:44.193311   13752 command_runner.go:130] ! I0612 22:02:28.917947       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:44.193311   13752 command_runner.go:130] ! I0612 22:02:28.921952       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 15:03:44.193395   13752 command_runner.go:130] ! I0612 22:02:28.922146       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 15:03:44.193395   13752 command_runner.go:130] ! I0612 22:02:28.922426       1 instance.go:299] Using reconciler: lease
	I0612 15:03:44.193395   13752 command_runner.go:130] ! I0612 22:02:29.570201       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0612 15:03:44.193478   13752 command_runner.go:130] ! W0612 22:02:29.570355       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.193478   13752 command_runner.go:130] ! I0612 22:02:29.801222       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0612 15:03:44.193576   13752 command_runner.go:130] ! I0612 22:02:29.801702       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0612 15:03:44.193576   13752 command_runner.go:130] ! I0612 22:02:30.046166       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0612 15:03:44.193576   13752 command_runner.go:130] ! I0612 22:02:30.216981       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0612 15:03:44.193576   13752 command_runner.go:130] ! I0612 22:02:30.231997       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0612 15:03:44.194179   13752 command_runner.go:130] ! W0612 22:02:30.232097       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194179   13752 command_runner.go:130] ! W0612 22:02:30.232107       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194179   13752 command_runner.go:130] ! I0612 22:02:30.232792       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0612 15:03:44.194179   13752 command_runner.go:130] ! W0612 22:02:30.232881       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194179   13752 command_runner.go:130] ! I0612 22:02:30.233864       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0612 15:03:44.194179   13752 command_runner.go:130] ! I0612 22:02:30.235099       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0612 15:03:44.194179   13752 command_runner.go:130] ! W0612 22:02:30.235211       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0612 15:03:44.194336   13752 command_runner.go:130] ! W0612 22:02:30.235220       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0612 15:03:44.194336   13752 command_runner.go:130] ! I0612 22:02:30.237278       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0612 15:03:44.194378   13752 command_runner.go:130] ! W0612 22:02:30.237314       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0612 15:03:44.194409   13752 command_runner.go:130] ! I0612 22:02:30.238451       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.238555       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.238564       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.239199       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.239289       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.239352       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.239881       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.242982       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.243157       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.243324       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.245920       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.246121       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.246235       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.249402       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.249562       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.255420       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.255587       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.255759       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.257021       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.257206       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.257308       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.269872       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.270105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.270312       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.272005       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.273608       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.273714       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.273724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.277668       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.277779       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.277789       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0612 15:03:44.195028   13752 command_runner.go:130] ! I0612 22:02:30.280767       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0612 15:03:44.195028   13752 command_runner.go:130] ! W0612 22:02:30.280916       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.195091   13752 command_runner.go:130] ! W0612 22:02:30.280928       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.195091   13752 command_runner.go:130] ! I0612 22:02:30.281776       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0612 15:03:44.195091   13752 command_runner.go:130] ! W0612 22:02:30.281806       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.195091   13752 command_runner.go:130] ! I0612 22:02:30.296752       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0612 15:03:44.195091   13752 command_runner.go:130] ! W0612 22:02:30.296810       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.195091   13752 command_runner.go:130] ! I0612 22:02:30.901606       1 secure_serving.go:213] Serving securely on [::]:8443
	I0612 15:03:44.195199   13752 command_runner.go:130] ! I0612 22:02:30.901766       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:44.195199   13752 command_runner.go:130] ! I0612 22:02:30.903281       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0612 15:03:44.195291   13752 command_runner.go:130] ! I0612 22:02:30.903373       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0612 15:03:44.195291   13752 command_runner.go:130] ! I0612 22:02:30.903401       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0612 15:03:44.195338   13752 command_runner.go:130] ! I0612 22:02:30.903987       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0612 15:03:44.195338   13752 command_runner.go:130] ! I0612 22:02:30.904124       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.904843       1 aggregator.go:163] waiting for initial CRD sync...
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.905095       1 controller.go:78] Starting OpenAPI AggregationController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.906424       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.901780       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.907108       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.907337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.901790       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.901800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.909555       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.909699       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.910003       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.911734       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.911846       1 controller.go:116] Starting legacy_token_tracking_controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.911861       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.912590       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.912666       1 available_controller.go:423] Starting AvailableConditionController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.912673       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.913776       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.953613       1 controller.go:139] Starting OpenAPI controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.953929       1 controller.go:87] Starting OpenAPI V3 controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.954278       1 naming_controller.go:291] Starting NamingConditionController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.954516       1 establishing_controller.go:76] Starting EstablishingController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.954966       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.955230       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.955507       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.003418       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.009966       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.010019       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.010029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.010400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.011993       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.012756       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.017182       1 aggregator.go:165] initial CRD sync complete...
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.017223       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.017231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.017238       1 cache.go:39] Caches are synced for autoregister controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.018109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.018524       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:44.195954   13752 command_runner.go:130] ! I0612 22:02:31.019519       1 policy_source.go:224] refreshing policies
	I0612 15:03:44.195954   13752 command_runner.go:130] ! I0612 22:02:31.020420       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 15:03:44.195954   13752 command_runner.go:130] ! I0612 22:02:31.091331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 15:03:44.196001   13752 command_runner.go:130] ! I0612 22:02:31.909532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 15:03:44.196001   13752 command_runner.go:130] ! W0612 22:02:32.355789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.198.154 172.23.200.184]
	I0612 15:03:44.196001   13752 command_runner.go:130] ! I0612 22:02:32.358485       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 15:03:44.196001   13752 command_runner.go:130] ! I0612 22:02:32.377254       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 15:03:44.196084   13752 command_runner.go:130] ! I0612 22:02:33.727670       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 15:03:44.196084   13752 command_runner.go:130] ! I0612 22:02:34.008881       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 15:03:44.196118   13752 command_runner.go:130] ! I0612 22:02:34.034607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 15:03:44.196118   13752 command_runner.go:130] ! I0612 22:02:34.157870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 15:03:44.196147   13752 command_runner.go:130] ! I0612 22:02:34.176471       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 15:03:44.196147   13752 command_runner.go:130] ! W0612 22:02:52.350035       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.200.184]
	I0612 15:03:44.203649   13752 logs.go:123] Gathering logs for kube-proxy [c4842faba751] ...
	I0612 15:03:44.203909   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4842faba751"
	I0612 15:03:44.229657   13752 command_runner.go:130] ! I0612 21:39:47.407607       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.423801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.198.154"]
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.480061       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.480182       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.480205       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.484521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.485171       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.485535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488126       1 config.go:192] "Starting service config controller"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488188       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488969       1 config.go:319] "Starting node config controller"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.489001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.588500       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.588641       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.589226       1 shared_informer.go:320] Caches are synced for node config
	I0612 15:03:44.232405   13752 logs.go:123] Gathering logs for container status ...
	I0612 15:03:44.232405   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 15:03:44.292648   13752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0612 15:03:44.294678   13752 command_runner.go:130] > f2a949d407287       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   2434f89aefe00       busybox-fc5497c4f-45qqd
	I0612 15:03:44.294678   13752 command_runner.go:130] > 26e5daf354e36       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   986567ef57643       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:44.294678   13752 command_runner.go:130] > 448e057077ddc       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   5287b61207e62       storage-provisioner
	I0612 15:03:44.294741   13752 command_runner.go:130] > cccfd1e9fef5e       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a20975d81b350       kindnet-bqlg8
	I0612 15:03:44.294782   13752 command_runner.go:130] > 3546a5c003210       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5287b61207e62       storage-provisioner
	I0612 15:03:44.294782   13752 command_runner.go:130] > 227a905829b07       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   435c56b0fbbbb       kube-proxy-47lr8
	I0612 15:03:44.294834   13752 command_runner.go:130] > 6b61f5f6483d5       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   76517193a960a       etcd-multinode-025000
	I0612 15:03:44.294940   13752 command_runner.go:130] > bbe2d2e51b5f3       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   20cbfb3fb8531       kube-apiserver-multinode-025000
	I0612 15:03:44.294981   13752 command_runner.go:130] > 7acc8ff0a9317       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   a228f6c30fdf4       kube-controller-manager-multinode-025000
	I0612 15:03:44.295017   13752 command_runner.go:130] > 755750ecd1e39       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   da184577f0371       kube-scheduler-multinode-025000
	I0612 15:03:44.295068   13752 command_runner.go:130] > bfc0382d49a48       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   84a9b747663ca       busybox-fc5497c4f-45qqd
	I0612 15:03:44.295068   13752 command_runner.go:130] > e83cf4eef49e4       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   894c58e9fe752       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:44.295154   13752 command_runner.go:130] > 4d60d82f6bc5d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   92f2d5f19e95e       kindnet-bqlg8
	I0612 15:03:44.295191   13752 command_runner.go:130] > c4842faba751e       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   fad98f611536b       kube-proxy-47lr8
	I0612 15:03:44.295267   13752 command_runner.go:130] > 6b021c195669e       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d9933fdc9ca72       kube-scheduler-multinode-025000
	I0612 15:03:44.295343   13752 command_runner.go:130] > 685d167da53c9       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bb4351fab502e       kube-controller-manager-multinode-025000
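	The container table above comes from `crictl ps -a` with a `docker ps -a` fallback, and the per-component lookups later in this log (the k8s_kube-apiserver, k8s_etcd, ... runs) use docker ps name filters. A minimal sketch of that ID lookup, assuming a local docker CLI on PATH rather than minikube's SSH-based runner:

	// Sketch: list container IDs matching a name filter, mirroring the
	// `docker ps -a --filter=name=... --format={{.ID}}` calls in this log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainers(nameFilter string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+nameFilter,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, fmt.Errorf("docker ps: %w", err)
		}
		// One ID per line; Fields also tolerates a trailing newline.
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		ids, err := listContainers("k8s_kube-apiserver")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}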
	I0612 15:03:44.298591   13752 logs.go:123] Gathering logs for etcd [6b61f5f6483d] ...
	I0612 15:03:44.298655   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61f5f6483d"
	I0612 15:03:44.322708   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.594582Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:44.326186   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.595941Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.200.184:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.200.184:2380","--initial-cluster=multinode-025000=https://172.23.200.184:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.200.184:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.200.184:2380","--name=multinode-025000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0612 15:03:44.326186   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596165Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0612 15:03:44.326343   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.596271Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:44.326383   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596356Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.200.184:2380"]}
	I0612 15:03:44.326413   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596492Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:44.326491   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.611167Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"]}
	I0612 15:03:44.326562   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.613093Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-025000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0612 15:03:44.326562   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.643295Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"27.151363ms"}
	I0612 15:03:44.326656   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.674268Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0612 15:03:44.326656   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702241Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","commit-index":2039}
	I0612 15:03:44.326742   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=()"}
	I0612 15:03:44.326742   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became follower at term 2"}
	I0612 15:03:44.326742   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.70261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b93ef5bd064a9684 [peers: [], term: 2, commit: 2039, applied: 0, lastindex: 2039, lastterm: 2]"}
	I0612 15:03:44.326821   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.719372Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0612 15:03:44.326821   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.724082Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1403}
	I0612 15:03:44.326821   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.735755Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1769}
	I0612 15:03:44.326913   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.743333Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0612 15:03:44.326913   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.753311Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b93ef5bd064a9684","timeout":"7s"}
	I0612 15:03:44.326913   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755587Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b93ef5bd064a9684"}
	I0612 15:03:44.326913   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755671Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b93ef5bd064a9684","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0612 15:03:44.326998   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758078Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0612 15:03:44.326998   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758939Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0612 15:03:44.326998   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0612 15:03:44.327089   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0612 15:03:44.327089   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=(13348376537775904388)"}
	I0612 15:03:44.327089   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759589Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","added-peer-id":"b93ef5bd064a9684","added-peer-peer-urls":["https://172.23.198.154:2380"]}
	I0612 15:03:44.327089   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.760197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","cluster-version":"3.5"}
	I0612 15:03:44.327194   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.761198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0612 15:03:44.327194   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.764395Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:44.327305   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.765492Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b93ef5bd064a9684","initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766195Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766744Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.200.184:2380"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.767384Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.200.184:2380"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 is starting a new election at term 2"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.50332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became pre-candidate at term 2"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgPreVoteResp from b93ef5bd064a9684 at term 2"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became candidate at term 3"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgVoteResp from b93ef5bd064a9684 at term 3"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became leader at term 3"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b93ef5bd064a9684 elected leader b93ef5bd064a9684 at term 3"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b93ef5bd064a9684","local-member-attributes":"{Name:multinode-025000 ClientURLs:[https://172.23.200.184:2379]}","request-path":"/0/members/b93ef5bd064a9684/attributes","cluster-id":"a7fa2563dcb4b7b8","publish-timeout":"7s"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.512996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.513243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.514729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.515422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.200.184:2379"}
	I0612 15:03:46.838416   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:03:46.838687   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 200:
	ok
	I0612 15:03:46.846512   13752 round_trippers.go:463] GET https://172.23.200.184:8443/version
	I0612 15:03:46.846512   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:46.846512   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:46.846512   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:46.846774   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:46.846774   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:46.846774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:46.846774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Content-Length: 263
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:46 GMT
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Audit-Id: 8cdbc2a9-51bd-41b7-90d2-8656a07d41d2
	I0612 15:03:46.846774   13752 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
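
Once etcd is back, the waiter polls GET /healthz until it returns 200 "ok", then reads /version and parses the JSON body shown above to record the control-plane version (3.66s end to end here). A self-contained sketch of that probe using only the standard library, assuming the same endpoint and skipping certificate verification purely for illustration (minikube itself trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    // versionInfo mirrors the fields of the /version body shown above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
    }

    func main() {
        base := "https://172.23.200.184:8443"
        // InsecureSkipVerify is an assumption for this sketch only; the real
        // client verifies the apiserver against the cluster CA.
        c := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        // Poll /healthz until the apiserver answers 200 "ok".
        var healthy bool
        for i := 0; i < 30 && !healthy; i++ {
            resp, err := c.Get(base + "/healthz")
            if err != nil {
                time.Sleep(time.Second)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                healthy = true
            } else {
                time.Sleep(time.Second)
            }
        }
        if !healthy {
            log.Fatal("apiserver never became healthy")
        }

        // Then fetch the control-plane version, as logged at api_server.go:141.
        resp, err := c.Get(base + "/version")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            log.Fatal(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
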
	I0612 15:03:46.846774   13752 api_server.go:141] control plane version: v1.30.1
	I0612 15:03:46.846774   13752 api_server.go:131] duration metric: took 3.6629527s to wait for apiserver health ...
	I0612 15:03:46.846774   13752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 15:03:46.848941   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0612 15:03:46.878233   13752 command_runner.go:130] > bbe2d2e51b5f
	I0612 15:03:46.878618   13752 logs.go:276] 1 containers: [bbe2d2e51b5f]
	I0612 15:03:46.888520   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0612 15:03:46.912452   13752 command_runner.go:130] > 6b61f5f6483d
	I0612 15:03:46.912542   13752 logs.go:276] 1 containers: [6b61f5f6483d]
	I0612 15:03:46.921572   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0612 15:03:46.945490   13752 command_runner.go:130] > 26e5daf354e3
	I0612 15:03:46.945558   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:03:46.945592   13752 logs.go:276] 2 containers: [26e5daf354e3 e83cf4eef49e]
	I0612 15:03:46.954457   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0612 15:03:46.976262   13752 command_runner.go:130] > 755750ecd1e3
	I0612 15:03:46.976262   13752 command_runner.go:130] > 6b021c195669
	I0612 15:03:46.976262   13752 logs.go:276] 2 containers: [755750ecd1e3 6b021c195669]
	I0612 15:03:46.985535   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0612 15:03:47.014567   13752 command_runner.go:130] > 227a905829b0
	I0612 15:03:47.015903   13752 command_runner.go:130] > c4842faba751
	I0612 15:03:47.015903   13752 logs.go:276] 2 containers: [227a905829b0 c4842faba751]
	I0612 15:03:47.025860   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0612 15:03:47.051274   13752 command_runner.go:130] > 7acc8ff0a931
	I0612 15:03:47.051274   13752 command_runner.go:130] > 685d167da53c
	I0612 15:03:47.051348   13752 logs.go:276] 2 containers: [7acc8ff0a931 685d167da53c]
	I0612 15:03:47.063646   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0612 15:03:47.092147   13752 command_runner.go:130] > cccfd1e9fef5
	I0612 15:03:47.092147   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:03:47.092147   13752 logs.go:276] 2 containers: [cccfd1e9fef5 4d60d82f6bc5]
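
Before gathering logs, the runner locates each control-plane container by name prefix, one docker ps -a --filter=name=k8s_<component> --format={{.ID}} call per component, and gets back one ID (kube-apiserver, etcd) or two (the pre- and post-restart instances of coredns, kube-scheduler, kube-proxy, kube-controller-manager and kindnet). The same enumeration can be scripted directly; a small sketch with os/exec, assuming only that the docker binary is on PATH:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same k8s_ name prefixes the log above filters on.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, comp := range components {
            // Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
            if err != nil {
                log.Fatalf("%s: %v", comp, err)
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers %v\n", comp, len(ids), ids)
        }
    }
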
	I0612 15:03:47.092147   13752 logs.go:123] Gathering logs for kindnet [4d60d82f6bc5] ...
	I0612 15:03:47.092147   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d60d82f6bc5"
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.982546       1 main.go:227] handling current node
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.982561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.982568       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.982982       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.983049       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.124269   13752 command_runner.go:130] ! I0612 21:49:03.989649       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.124269   13752 command_runner.go:130] ! I0612 21:49:03.989791       1 main.go:227] handling current node
	I0612 15:03:47.124269   13752 command_runner.go:130] ! I0612 21:49:03.989809       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125640   13752 command_runner.go:130] ! I0612 21:49:03.989817       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125640   13752 command_runner.go:130] ! I0612 21:49:03.990195       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125640   13752 command_runner.go:130] ! I0612 21:49:03.990415       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125640   13752 command_runner.go:130] ! I0612 21:49:14.000384       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000493       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000507       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000513       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000627       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000640       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.006829       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.006871       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.006883       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.006889       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.007645       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.007745       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.016679       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.016806       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.016838       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.016845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.017149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.017279       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.025835       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.025933       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.025947       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.025955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.026381       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.026533       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033148       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033257       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033273       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033281       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033402       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033435       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.046279       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.046719       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.046832       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.047109       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.047537       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.047572       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064171       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064216       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064230       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064236       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064574       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:24.071894       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.071935       1 main.go:227] handling current node
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.071949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.071955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.072148       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.072184       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:34.086428       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:34.086522       1 main.go:227] handling current node
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:34.086536       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:34.086543       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:34.086690       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:34.086707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:44.093862       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:44.093905       1 main.go:227] handling current node
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:44.093919       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:44.093925       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126456   13752 command_runner.go:130] ! I0612 21:50:44.094840       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126456   13752 command_runner.go:130] ! I0612 21:50:44.094916       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126486   13752 command_runner.go:130] ! I0612 21:50:54.102869       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126486   13752 command_runner.go:130] ! I0612 21:50:54.103074       1 main.go:227] handling current node
	I0612 15:03:47.126486   13752 command_runner.go:130] ! I0612 21:50:54.103091       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126486   13752 command_runner.go:130] ! I0612 21:50:54.103100       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:50:54.103237       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:50:54.103276       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:51:04.110391       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:51:04.110501       1 main.go:227] handling current node
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:51:04.110517       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:04.110556       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:04.110721       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:04.110794       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:14.121126       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:14.121263       1 main.go:227] handling current node
	I0612 15:03:47.126692   13752 command_runner.go:130] ! I0612 21:51:14.121280       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126692   13752 command_runner.go:130] ! I0612 21:51:14.121288       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126692   13752 command_runner.go:130] ! I0612 21:51:14.121430       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126692   13752 command_runner.go:130] ! I0612 21:51:14.121462       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126758   13752 command_runner.go:130] ! I0612 21:51:24.131659       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126758   13752 command_runner.go:130] ! I0612 21:51:24.131690       1 main.go:227] handling current node
	I0612 15:03:47.126758   13752 command_runner.go:130] ! I0612 21:51:24.131702       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126758   13752 command_runner.go:130] ! I0612 21:51:24.131708       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126823   13752 command_runner.go:130] ! I0612 21:51:24.132287       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126880   13752 command_runner.go:130] ! I0612 21:51:24.132319       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126880   13752 command_runner.go:130] ! I0612 21:51:34.139419       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.139546       1 main.go:227] handling current node
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.139561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.139570       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.140149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.140253       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:44.152295       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126997   13752 command_runner.go:130] ! I0612 21:51:44.152430       1 main.go:227] handling current node
	I0612 15:03:47.126997   13752 command_runner.go:130] ! I0612 21:51:44.152464       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126997   13752 command_runner.go:130] ! I0612 21:51:44.152471       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127062   13752 command_runner.go:130] ! I0612 21:51:44.153262       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127086   13752 command_runner.go:130] ! I0612 21:51:44.153471       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127117   13752 command_runner.go:130] ! I0612 21:51:54.160684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127117   13752 command_runner.go:130] ! I0612 21:51:54.160938       1 main.go:227] handling current node
	I0612 15:03:47.127117   13752 command_runner.go:130] ! I0612 21:51:54.160953       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:51:54.160960       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:51:54.161457       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:51:54.161482       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:52:04.170421       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:52:04.170526       1 main.go:227] handling current node
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:52:04.170541       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127272   13752 command_runner.go:130] ! I0612 21:52:04.170548       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127272   13752 command_runner.go:130] ! I0612 21:52:04.171076       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127303   13752 command_runner.go:130] ! I0612 21:52:04.171113       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180403       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180490       1 main.go:227] handling current node
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180508       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180516       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180994       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127415   13752 command_runner.go:130] ! I0612 21:52:14.181032       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127415   13752 command_runner.go:130] ! I0612 21:52:24.195314       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127439   13752 command_runner.go:130] ! I0612 21:52:24.195545       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:24.195735       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:24.195807       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:24.196026       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:24.196064       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.202013       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.202806       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.202932       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.203029       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.203265       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.203299       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209440       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209476       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209546       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209839       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.210283       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223351       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223443       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223459       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223466       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223810       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223840       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.236876       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.237155       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.237949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.238341       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.238673       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.238707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245069       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245110       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245122       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245131       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245834       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.258923       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.258965       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.258977       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.258983       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.259367       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:24.259399       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:34.265573       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:34.265738       1 main.go:227] handling current node
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:34.265787       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:34.265797       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128097   13752 command_runner.go:130] ! I0612 21:53:34.266180       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128097   13752 command_runner.go:130] ! I0612 21:53:34.266257       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128097   13752 command_runner.go:130] ! I0612 21:53:44.278968       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128097   13752 command_runner.go:130] ! I0612 21:53:44.279173       1 main.go:227] handling current node
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:44.279207       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:44.279294       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:44.279698       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:44.279829       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:54.290366       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:54.290472       1 main.go:227] handling current node
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:54.290487       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128248   13752 command_runner.go:130] ! I0612 21:53:54.290494       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128248   13752 command_runner.go:130] ! I0612 21:53:54.291158       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128248   13752 command_runner.go:130] ! I0612 21:53:54.291263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128248   13752 command_runner.go:130] ! I0612 21:54:04.308014       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128316   13752 command_runner.go:130] ! I0612 21:54:04.308117       1 main.go:227] handling current node
	I0612 15:03:47.128316   13752 command_runner.go:130] ! I0612 21:54:04.308133       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128316   13752 command_runner.go:130] ! I0612 21:54:04.308142       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128388   13752 command_runner.go:130] ! I0612 21:54:04.308605       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128414   13752 command_runner.go:130] ! I0612 21:54:04.308643       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128414   13752 command_runner.go:130] ! I0612 21:54:14.316271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128446   13752 command_runner.go:130] ! I0612 21:54:14.316380       1 main.go:227] handling current node
	I0612 15:03:47.128473   13752 command_runner.go:130] ! I0612 21:54:14.316396       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128473   13752 command_runner.go:130] ! I0612 21:54:14.316403       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128538   13752 command_runner.go:130] ! I0612 21:54:14.316942       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128603   13752 command_runner.go:130] ! I0612 21:54:14.316959       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128603   13752 command_runner.go:130] ! I0612 21:54:24.330853       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128668   13752 command_runner.go:130] ! I0612 21:54:24.331009       1 main.go:227] handling current node
	I0612 15:03:47.128694   13752 command_runner.go:130] ! I0612 21:54:24.331025       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128720   13752 command_runner.go:130] ! I0612 21:54:24.331033       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128720   13752 command_runner.go:130] ! I0612 21:54:24.331178       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128748   13752 command_runner.go:130] ! I0612 21:54:24.331213       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128783   13752 command_runner.go:130] ! I0612 21:54:34.340396       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128783   13752 command_runner.go:130] ! I0612 21:54:34.340543       1 main.go:227] handling current node
	I0612 15:03:47.128783   13752 command_runner.go:130] ! I0612 21:54:34.340558       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128836   13752 command_runner.go:130] ! I0612 21:54:34.340565       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128885   13752 command_runner.go:130] ! I0612 21:54:34.340924       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128910   13752 command_runner.go:130] ! I0612 21:54:34.341013       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128910   13752 command_runner.go:130] ! I0612 21:54:44.347468       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128910   13752 command_runner.go:130] ! I0612 21:54:44.347599       1 main.go:227] handling current node
	I0612 15:03:47.128974   13752 command_runner.go:130] ! I0612 21:54:44.347614       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129014   13752 command_runner.go:130] ! I0612 21:54:44.347622       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:44.348279       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:44.348396       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.364900       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365031       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365046       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365054       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365542       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365727       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381041       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381087       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381103       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381110       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381700       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.395619       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.395666       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.395679       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.395686       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.396514       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.396536       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.411927       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412012       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412028       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412036       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412568       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412661       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420011       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420100       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420115       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420481       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420570       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:44.432502       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:44.432604       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:44.432620       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:44.432632       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:44.432881       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:44.433061       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:54.446991       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:54.447440       1 main.go:227] handling current node
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:54.447622       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:54.447655       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:55:54.447830       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:55:54.447901       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:56:04.463393       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:56:04.463546       1 main.go:227] handling current node
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:56:04.463575       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129818   13752 command_runner.go:130] ! I0612 21:56:04.463596       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129841   13752 command_runner.go:130] ! I0612 21:56:04.463900       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129841   13752 command_runner.go:130] ! I0612 21:56:04.463932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.477690       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.477837       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.477852       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.477860       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.478029       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.478096       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.485525       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.485620       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.485655       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.485663       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.486202       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.486237       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.502904       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.502951       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.502964       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.502970       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.503088       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.503684       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512292       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512356       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512368       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512374       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512909       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.513033       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.520903       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521017       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521034       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521041       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521441       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.535531       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.535625       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.535665       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.535672       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.536272       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.536355       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559304       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559354       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559375       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559382       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559735       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.560332       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:24.568057       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:24.568103       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:24.568116       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:24.568122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:24.568938       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:24.569042       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584121       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584277       1 main.go:227] handling current node
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584502       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584607       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584995       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.585095       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:44.600201       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:44.600339       1 main.go:227] handling current node
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:44.600353       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:44.600361       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:44.600842       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:44.600859       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:54.615436       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:54.615497       1 main.go:227] handling current node
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:54.615511       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.130624   13752 command_runner.go:130] ! I0612 21:57:54.615536       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130639   13752 command_runner.go:130] ! I0612 21:58:04.629487       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130639   13752 command_runner.go:130] ! I0612 21:58:04.629657       1 main.go:227] handling current node
	I0612 15:03:47.130639   13752 command_runner.go:130] ! I0612 21:58:04.629797       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.130639   13752 command_runner.go:130] ! I0612 21:58:04.629891       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130701   13752 command_runner.go:130] ! I0612 21:58:04.630131       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.130701   13752 command_runner.go:130] ! I0612 21:58:04.631059       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.130994   13752 command_runner.go:130] ! I0612 21:58:04.631221       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:47.130994   13752 command_runner.go:130] ! I0612 21:58:14.647500       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130994   13752 command_runner.go:130] ! I0612 21:58:14.647527       1 main.go:227] handling current node
	I0612 15:03:47.130994   13752 command_runner.go:130] ! I0612 21:58:14.647539       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:14.647544       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:14.647661       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:14.647672       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:24.655905       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:24.656017       1 main.go:227] handling current node
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:24.656064       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:24.656140       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:24.656636       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:24.656721       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:34.670254       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:34.670590       1 main.go:227] handling current node
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:34.670966       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131246   13752 command_runner.go:130] ! I0612 21:58:34.671845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131270   13752 command_runner.go:130] ! I0612 21:58:34.672269       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131270   13752 command_runner.go:130] ! I0612 21:58:34.672369       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131270   13752 command_runner.go:130] ! I0612 21:58:44.682684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.682854       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.682877       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.682887       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.683737       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.683808       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691077       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691167       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691199       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691207       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691344       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691357       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.700863       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701017       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701032       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701040       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701620       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701736       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.717668       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.717949       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.717991       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.718050       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.718200       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.718263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724311       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724441       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724456       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724464       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724785       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.737266       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.737410       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.737425       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.737432       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.738157       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.738269       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:44.746123       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:44.746292       1 main.go:227] handling current node
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:44.746313       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:44.746332       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:44.746856       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:44.746925       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.752611       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.752658       1 main.go:227] handling current node
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.752671       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.752678       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.753183       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.753277       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
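
The repeating block above is kindnet's reconcile loop: roughly every ten seconds it lists the cluster's nodes, acknowledges the node it runs on as "current" (no route programming is needed for local traffic), and records each peer's pod CIDR so that routes to 10.244.1.0/24 and 10.244.3.0/24 keep pointing at 172.23.196.105 and 172.23.206.72. The Go sketch below reproduces that shape; it is an illustration assembled from these log lines, not kindnet's actual source, and the Node type, reconcile function, and hard-coded node data are stand-ins.

package main

import (
	"log"
	"time"
)

// Node is a hypothetical stand-in for the node data kindnet reads
// from the Kubernetes API.
type Node struct {
	Name  string
	IPs   map[string]struct{} // node addresses, e.g. 172.23.198.154
	CIDRs []string            // pod CIDRs, e.g. 10.244.1.0/24
}

// reconcile mirrors the per-node handling visible in the log: the
// current node is acknowledged and skipped; every other node has its
// pod CIDR noted (a real daemon would install a route to it here).
func reconcile(current string, nodes []Node) {
	for _, n := range nodes {
		log.Printf("Handling node with IPs: %v", n.IPs)
		if n.Name == current {
			log.Printf("handling current node")
			continue
		}
		log.Printf("Node %s has CIDR %v", n.Name, n.CIDRs)
	}
}

func main() {
	// Node data copied from the entries above; the current node's own
	// pod CIDR is not shown in the log, so it is left empty here.
	nodes := []Node{
		{Name: "multinode-025000", IPs: map[string]struct{}{"172.23.198.154": {}}},
		{Name: "multinode-025000-m02", IPs: map[string]struct{}{"172.23.196.105": {}}, CIDRs: []string{"10.244.1.0/24"}},
		{Name: "multinode-025000-m03", IPs: map[string]struct{}{"172.23.206.72": {}}, CIDRs: []string{"10.244.3.0/24"}},
	}
	for range time.Tick(10 * time.Second) {
		reconcile("multinode-025000", nodes)
	}
}
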
	I0612 15:03:47.150134   13752 logs.go:123] Gathering logs for kube-apiserver [bbe2d2e51b5f] ...
	I0612 15:03:47.150134   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe2d2e51b5f"
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.032945       1 options.go:221] external host was not specified, using 172.23.200.184
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.036290       1 server.go:148] Version: v1.30.1
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.036339       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.916544       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.917947       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.921952       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.922146       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.922426       1 instance.go:299] Using reconciler: lease
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:29.570201       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:29.570355       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:29.801222       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:29.801702       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.046166       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.216981       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.231997       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.232097       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.232107       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.232792       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.232881       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.233864       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.235099       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.235211       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.235220       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.237278       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.237314       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0612 15:03:47.179806   13752 command_runner.go:130] ! I0612 22:02:30.238451       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.238555       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.238564       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.179806   13752 command_runner.go:130] ! I0612 22:02:30.239199       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.239289       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.239352       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.179994   13752 command_runner.go:130] ! I0612 22:02:30.239881       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0612 15:03:47.180018   13752 command_runner.go:130] ! I0612 22:02:30.242982       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0612 15:03:47.180018   13752 command_runner.go:130] ! W0612 22:02:30.243157       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180102   13752 command_runner.go:130] ! W0612 22:02:30.243324       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180102   13752 command_runner.go:130] ! I0612 22:02:30.245920       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0612 15:03:47.180102   13752 command_runner.go:130] ! W0612 22:02:30.246121       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180138   13752 command_runner.go:130] ! W0612 22:02:30.246235       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180138   13752 command_runner.go:130] ! I0612 22:02:30.249402       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.249562       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.255420       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.255587       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.255759       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.257021       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.257206       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.257308       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.269872       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.270105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.270312       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.272005       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.273608       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.273714       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.273724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.277668       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.277779       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.277789       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.280767       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.280916       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.280928       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.281776       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.281806       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.296752       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.296810       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.901606       1 secure_serving.go:213] Serving securely on [::]:8443
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.901766       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.903281       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.903373       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.903401       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.903987       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.904124       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.904843       1 aggregator.go:163] waiting for initial CRD sync...
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.905095       1 controller.go:78] Starting OpenAPI AggregationController
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.906424       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.901780       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.907108       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.907337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:47.180922   13752 command_runner.go:130] ! I0612 22:02:30.901790       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0612 15:03:47.180922   13752 command_runner.go:130] ! I0612 22:02:30.901800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.180922   13752 command_runner.go:130] ! I0612 22:02:30.909555       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0612 15:03:47.180922   13752 command_runner.go:130] ! I0612 22:02:30.909699       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0612 15:03:47.180990   13752 command_runner.go:130] ! I0612 22:02:30.910003       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0612 15:03:47.180990   13752 command_runner.go:130] ! I0612 22:02:30.911734       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0612 15:03:47.181024   13752 command_runner.go:130] ! I0612 22:02:30.911846       1 controller.go:116] Starting legacy_token_tracking_controller
	I0612 15:03:47.181024   13752 command_runner.go:130] ! I0612 22:02:30.911861       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0612 15:03:47.181024   13752 command_runner.go:130] ! I0612 22:02:30.912590       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0612 15:03:47.181067   13752 command_runner.go:130] ! I0612 22:02:30.912666       1 available_controller.go:423] Starting AvailableConditionController
	I0612 15:03:47.181067   13752 command_runner.go:130] ! I0612 22:02:30.912673       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0612 15:03:47.181067   13752 command_runner.go:130] ! I0612 22:02:30.913776       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0612 15:03:47.181144   13752 command_runner.go:130] ! I0612 22:02:30.953613       1 controller.go:139] Starting OpenAPI controller
	I0612 15:03:47.181144   13752 command_runner.go:130] ! I0612 22:02:30.953929       1 controller.go:87] Starting OpenAPI V3 controller
	I0612 15:03:47.181144   13752 command_runner.go:130] ! I0612 22:02:30.954278       1 naming_controller.go:291] Starting NamingConditionController
	I0612 15:03:47.181144   13752 command_runner.go:130] ! I0612 22:02:30.954516       1 establishing_controller.go:76] Starting EstablishingController
	I0612 15:03:47.181206   13752 command_runner.go:130] ! I0612 22:02:30.954966       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0612 15:03:47.181206   13752 command_runner.go:130] ! I0612 22:02:30.955230       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0612 15:03:47.181258   13752 command_runner.go:130] ! I0612 22:02:30.955507       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0612 15:03:47.181258   13752 command_runner.go:130] ! I0612 22:02:31.003418       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 15:03:47.181258   13752 command_runner.go:130] ! I0612 22:02:31.009966       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 15:03:47.181315   13752 command_runner.go:130] ! I0612 22:02:31.010019       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 15:03:47.181315   13752 command_runner.go:130] ! I0612 22:02:31.010029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 15:03:47.181315   13752 command_runner.go:130] ! I0612 22:02:31.010400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.011993       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.012756       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.017182       1 aggregator.go:165] initial CRD sync complete...
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.017223       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.017231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.017238       1 cache.go:39] Caches are synced for autoregister controller
	I0612 15:03:47.181476   13752 command_runner.go:130] ! I0612 22:02:31.018109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 15:03:47.181506   13752 command_runner.go:130] ! I0612 22:02:31.018524       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:47.181506   13752 command_runner.go:130] ! I0612 22:02:31.019519       1 policy_source.go:224] refreshing policies
	I0612 15:03:47.181506   13752 command_runner.go:130] ! I0612 22:02:31.020420       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 15:03:47.181506   13752 command_runner.go:130] ! I0612 22:02:31.091331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:31.909532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 15:03:47.181588   13752 command_runner.go:130] ! W0612 22:02:32.355789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.198.154 172.23.200.184]
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:32.358485       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:32.377254       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:33.727670       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:34.008881       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 15:03:47.181652   13752 command_runner.go:130] ! I0612 22:02:34.034607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 15:03:47.181677   13752 command_runner.go:130] ! I0612 22:02:34.157870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 15:03:47.181703   13752 command_runner.go:130] ! I0612 22:02:34.176471       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 15:03:47.181703   13752 command_runner.go:130] ! W0612 22:02:52.350035       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.200.184]
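
Each per-container section of this post-mortem is produced the same way: logs.go selects a control-plane container by ID and runs "docker logs --tail 400 <id>" through the SSH runner, as the ssh_runner Run lines above and below show. A minimal local sketch of that gathering step follows; exec.Command on the host stands in for minikube's SSH runner, and gatherLogs and the container map are illustrative names with IDs taken from this report.

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs fetches the last 400 log lines of each container using
// the same docker invocation shown in the ssh_runner entries.
func gatherLogs(containers map[string]string) {
	for name, id := range containers {
		fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("failed to read logs for %s: %v\n", name, err)
			continue
		}
		fmt.Print(string(out))
	}
}

func main() {
	// Container IDs as they appear in this post-mortem.
	gatherLogs(map[string]string{
		"kube-apiserver":          "bbe2d2e51b5f",
		"kube-controller-manager": "7acc8ff0a931",
	})
}
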
	I0612 15:03:47.189065   13752 logs.go:123] Gathering logs for kube-controller-manager [7acc8ff0a931] ...
	I0612 15:03:47.189065   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7acc8ff0a931"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.579013       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.927149       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.927184       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.930688       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.932993       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.933167       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.933539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.987820       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.988653       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.994458       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.995780       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.996873       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.005703       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.005720       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.006099       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.006120       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.011328       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.013199       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.013216       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:47.214369   13752 command_runner.go:130] ! W0612 22:02:33.045760       1 shared_informer.go:597] resyncPeriod 19h21m1.650821539s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.046400       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:47.215082   13752 command_runner.go:130] ! I0612 22:02:33.046742       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:47.215174   13752 command_runner.go:130] ! I0612 22:02:33.047003       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:47.215174   13752 command_runner.go:130] ! I0612 22:02:33.047066       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:47.215174   13752 command_runner.go:130] ! I0612 22:02:33.047091       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:47.215225   13752 command_runner.go:130] ! I0612 22:02:33.047150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:47.215225   13752 command_runner.go:130] ! I0612 22:02:33.047175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:47.215279   13752 command_runner.go:130] ! I0612 22:02:33.047875       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:47.215321   13752 command_runner.go:130] ! I0612 22:02:33.048961       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049070       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049108       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049203       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049218       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049235       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049307       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! W0612 22:02:33.049318       1 shared_informer.go:597] resyncPeriod 16h27m54.164006095s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049536       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049616       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049652       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049852       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049880       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.052188       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.075270       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.088124       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.088224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.088312       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.092469       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.093016       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.093183       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.099173       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.099288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.099302       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:47.215967   13752 command_runner.go:130] ! I0612 22:02:33.099269       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:47.215967   13752 command_runner.go:130] ! I0612 22:02:33.099467       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:47.215967   13752 command_runner.go:130] ! I0612 22:02:33.102279       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:47.216013   13752 command_runner.go:130] ! I0612 22:02:33.103692       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:47.216140   13752 command_runner.go:130] ! I0612 22:02:33.103797       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.109335       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.109737       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.109801       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.109811       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.113018       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.114442       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.114573       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.118932       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.118955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.118979       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.119791       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.121411       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.119985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122332       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122409       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122432       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122572       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122710       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122722       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122748       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132412       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132517       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132620       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132660       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132669       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.139478       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.139854       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.140261       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.169621       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.169819       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.169849       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.170074       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.173816       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:47.216851   13752 command_runner.go:130] ! I0612 22:02:33.174120       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:47.216875   13752 command_runner.go:130] ! I0612 22:02:33.174130       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:47.216875   13752 command_runner.go:130] ! I0612 22:02:33.184678       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:47.216935   13752 command_runner.go:130] ! I0612 22:02:33.186030       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:47.216935   13752 command_runner.go:130] ! I0612 22:02:33.192152       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:47.216977   13752 command_runner.go:130] ! I0612 22:02:33.192257       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:47.217055   13752 command_runner.go:130] ! I0612 22:02:33.192268       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:47.217055   13752 command_runner.go:130] ! I0612 22:02:33.194361       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:47.217080   13752 command_runner.go:130] ! I0612 22:02:33.194659       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.194671       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.200378       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.200552       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.200579       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.203400       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.203797       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.203967       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.207566       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.207732       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.207743       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.207766       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.214389       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.214572       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.214655       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.220603       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.221181       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.222958       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:47.217110   13752 command_runner.go:130] ! E0612 22:02:33.228603       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.228994       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.253059       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.253281       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.253292       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.264081       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.266480       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.266606       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.266742       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.380173       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.380458       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.380796       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.398346       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.401718       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.401737       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.495874       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.496386       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.498064       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.698817       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:47.217838   13752 command_runner.go:130] ! I0612 22:02:33.699215       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:47.217838   13752 command_runner.go:130] ! I0612 22:02:33.699646       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:47.217838   13752 command_runner.go:130] ! I0612 22:02:33.744449       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:47.217926   13752 command_runner.go:130] ! I0612 22:02:33.744531       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:47.217926   13752 command_runner.go:130] ! I0612 22:02:33.744546       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:47.217986   13752 command_runner.go:130] ! E0612 22:02:33.807267       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.807295       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.856639       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.857088       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.857273       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.894016       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.896048       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.896083       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950707       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950731       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950771       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950821       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950870       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.995005       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.995247       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.062766       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.063067       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.063362       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.063411       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.068203       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.068603       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.068777       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.071309       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.071638       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.071795       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.080804       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.097810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.100018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.100030       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.102193       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.102337       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.102640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.102796       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:47.218740   13752 command_runner.go:130] ! I0612 22:02:44.102925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:47.218740   13752 command_runner.go:130] ! I0612 22:02:44.102986       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.218740   13752 command_runner.go:130] ! I0612 22:02:44.113771       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:47.218740   13752 command_runner.go:130] ! I0612 22:02:44.115010       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:47.218823   13752 command_runner.go:130] ! I0612 22:02:44.115463       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:47.218845   13752 command_runner.go:130] ! I0612 22:02:44.119062       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:47.218845   13752 command_runner.go:130] ! I0612 22:02:44.121259       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:47.218845   13752 command_runner.go:130] ! I0612 22:02:44.124526       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:47.218845   13752 command_runner.go:130] ! I0612 22:02:44.124650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:47.218930   13752 command_runner.go:130] ! I0612 22:02:44.124971       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:47.218930   13752 command_runner.go:130] ! I0612 22:02:44.126246       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:47.218930   13752 command_runner.go:130] ! I0612 22:02:44.133682       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:47.218930   13752 command_runner.go:130] ! I0612 22:02:44.134026       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.141044       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.145563       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.158513       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.162319       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.162613       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.162653       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.163186       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164074       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164451       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164672       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164769       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164780       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.167842       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.174384       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.182521       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.186460       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.194992       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.196327       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.196530       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.196665       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.200768       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.200988       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.201846       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.207493       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.228051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.792655ms"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.231633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.306µs"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.244808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.644732ms"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.246402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.002µs"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.297636       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.304265       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.304486       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.311023       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.350865       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.351039       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.353535       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.369296       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.372273       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.381442       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.821842       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:47.219679   13752 command_runner.go:130] ! I0612 22:02:44.870923       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:47.219679   13752 command_runner.go:130] ! I0612 22:02:44.871005       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:47.219679   13752 command_runner.go:130] ! I0612 22:03:11.878868       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.219679   13752 command_runner.go:130] ! I0612 22:03:24.254264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.921834ms"
	I0612 15:03:47.219782   13752 command_runner.go:130] ! I0612 22:03:24.256639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.601µs"
	I0612 15:03:47.219782   13752 command_runner.go:130] ! I0612 22:03:37.832133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.001µs"
	I0612 15:03:47.219837   13752 command_runner.go:130] ! I0612 22:03:37.905221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.518825ms"
	I0612 15:03:47.219861   13752 command_runner.go:130] ! I0612 22:03:37.905853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.201µs"
	I0612 15:03:47.219890   13752 command_runner.go:130] ! I0612 22:03:37.917312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.821108ms"
	I0612 15:03:47.219890   13752 command_runner.go:130] ! I0612 22:03:37.917472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
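
The kube-controller-manager excerpt above is dominated by the shared-informer startup handshake: each controller first logs "Waiting for caches to sync" (shared_informer.go:313) and only begins reconciling after the matching "Caches are synced" (shared_informer.go:320). A minimal client-go sketch of that same lifecycle, assuming a reachable kubeconfig; the clientset wiring below is illustrative, not minikube's or the controller-manager's own code:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative only: load whatever kubeconfig is current.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(cfg)

    	stop := make(chan struct{})
    	defer close(stop)

    	// Same lifecycle the log shows: start the informers, then block
    	// until the local caches are warm before doing any work.
    	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
    	nodeInformer := factory.Core().V1().Nodes().Informer()
    	factory.Start(stop) // "Waiting for caches to sync ..."
    	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
    		panic("caches never synced")
    	}
    	fmt.Println("Caches are synced") // reconciliation starts only here
    }

Each "Caches are synced for ..." line in the excerpt corresponds to one such wait completing for that controller's informers.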
	I0612 15:03:47.236954   13752 logs.go:123] Gathering logs for kindnet [cccfd1e9fef5] ...
	I0612 15:03:47.236954   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccfd1e9fef5"
	I0612 15:03:47.261820   13752 command_runner.go:130] ! I0612 22:02:33.621070       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0612 15:03:47.261820   13752 command_runner.go:130] ! I0612 22:02:33.621857       1 main.go:107] hostIP = 172.23.200.184
	I0612 15:03:47.261820   13752 command_runner.go:130] ! podIP = 172.23.200.184
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:02:33.622055       1 main.go:116] setting mtu 1500 for CNI 
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:02:33.622069       1 main.go:146] kindnetd IP family: "ipv4"
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:02:33.622082       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:03.928722       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:03.948068       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:03.948207       1 main.go:227] handling current node
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015006       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015280       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015617       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.196.105 Flags: [] Table: 0} 
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015960       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015976       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:04.016053       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032118       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032228       1 main.go:227] handling current node
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032243       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032255       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032739       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032836       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:24.045393       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.265557   13752 command_runner.go:130] ! I0612 22:03:24.045492       1 main.go:227] handling current node
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:24.045504       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:24.045510       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:24.045926       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:24.045941       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:34.052186       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:34.052288       1 main.go:227] handling current node
	I0612 15:03:47.265690   13752 command_runner.go:130] ! I0612 22:03:34.052302       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.265690   13752 command_runner.go:130] ! I0612 22:03:34.052309       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.265690   13752 command_runner.go:130] ! I0612 22:03:34.052423       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.265690   13752 command_runner.go:130] ! I0612 22:03:34.052452       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.265750   13752 command_runner.go:130] ! I0612 22:03:44.068019       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.265750   13752 command_runner.go:130] ! I0612 22:03:44.068061       1 main.go:227] handling current node
	I0612 15:03:47.265802   13752 command_runner.go:130] ! I0612 22:03:44.068088       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.265802   13752 command_runner.go:130] ! I0612 22:03:44.068096       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.265802   13752 command_runner.go:130] ! I0612 22:03:44.068651       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.265802   13752 command_runner.go:130] ! I0612 22:03:44.068721       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
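
The kindnet excerpt shows its steady ~10-second reconcile loop: list the nodes, handle the current node, and install a route to each remote node's PodCIDR via that node's IP (the routes.go:62 "Adding route" lines). A sketch of the route-install step, under the assumption that the github.com/vishvananda/netlink package is used for route programming; the helper name is hypothetical:

    package main

    import (
    	"log"
    	"net"

    	"github.com/vishvananda/netlink"
    )

    // addPodCIDRRoute mirrors the "Adding route {... Dst: 10.244.1.0/24
    // ... Gw: 172.23.196.105 ...}" lines above: route a remote node's
    // PodCIDR via that node's IP. Requires Linux and CAP_NET_ADMIN.
    func addPodCIDRRoute(podCIDR string, nodeIP net.IP) error {
    	_, dst, err := net.ParseCIDR(podCIDR)
    	if err != nil {
    		return err
    	}
    	// RouteReplace is idempotent, which suits a loop that re-handles
    	// every node on each pass.
    	return netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: nodeIP})
    }

    func main() {
    	if err := addPodCIDRRoute("10.244.1.0/24", net.ParseIP("172.23.196.105")); err != nil {
    		log.Fatal(err)
    	}
    }

Replacing rather than adding the route is what lets the loop in the timestamps above re-run every ten seconds without erroring on already-present routes.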
	I0612 15:03:47.269354   13752 logs.go:123] Gathering logs for Docker ...
	I0612 15:03:47.269354   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0612 15:03:47.301252   13752 command_runner.go:130] > Jun 12 22:00:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
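
The journal excerpt above records cri-dockerd crash-looping because dockerd was not yet up: each start attempt fataled on the unreachable /var/run/docker.sock, and after the third scheduled restart systemd hit the unit's start rate limit ("Start request repeated too quickly") and stopped retrying until the Docker engine itself came up at 22:01:50. A minimal Go sketch of the dependency being violated, i.e. polling the daemon socket until something is listening; the function name and timings are illustrative, not cri-dockerd's code:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    // waitForDocker polls a unix socket until it accepts a connection,
    // which is the precondition cri-dockerd kept failing on above.
    func waitForDocker(ctx context.Context, socket string) error {
    	for {
    		conn, err := net.DialTimeout("unix", socket, time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("dockerd never came up: %w", ctx.Err())
    		case <-time.After(2 * time.Second):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()
    	if err := waitForDocker(ctx, "/var/run/docker.sock"); err != nil {
    		panic(err)
    	}
    	fmt.Println("docker daemon is reachable")
    }

In the log, systemd's restart scheduling plays this polling role, which is why the failures are harmless noise once dockerd eventually starts.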
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:47.302005   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.903212301Z" level=info msg="Starting up"
	I0612 15:03:47.302005   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.904075211Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:47.302048   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.905013523Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=653
	I0612 15:03:47.302077   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.936715611Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:47.302077   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960715605Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:47.302077   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960765806Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:47.302157   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960836707Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:47.302157   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961045509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961654317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961681417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961916220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962126123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962152723Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962167223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962695730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.963400938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966083771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966199872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966461076Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967039883Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967257385Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967282486Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974400773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974631276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974732277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974755077Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974771478Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974844078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975137982Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975475986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975634588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975657088Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975672789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975691989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975721989Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975744389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975762790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302892   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975776490Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302892   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975789190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302892   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975800790Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302987   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975819990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.302987   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975835091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303048   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975847091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303072   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975859491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303101   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975870791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303141   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975883291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303141   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975894491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303141   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975906891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303211   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975920192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303236   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975935492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975947192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975958792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975971092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975989492Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976009893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976030193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976044093Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976167595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976210595Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976227295Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976239996Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976250696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976263096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976273096Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976489199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976766002Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976819403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976839003Z" level=info msg="containerd successfully booted in 0.042772s"
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:51 multinode-025000 dockerd[647]: time="2024-06-12T22:01:51.958896661Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.175284022Z" level=info msg="Loading containers: start."
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.600253538Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.679773678Z" level=info msg="Loading containers: done."
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.711890198Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.712661408Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774658419Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774960723Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 systemd[1]: Started Docker Application Container Engine.
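
On startup dockerd finds no external containerd ("containerd not running, starting managed containerd") and launches its own instance on /var/run/docker/containerd/containerd.sock, proceeding only after "containerd successfully booted". A short sketch of talking to such a socket with the official github.com/containerd/containerd client, just to show the version handshake; connecting to Docker's private socket path like this is purely illustrative and normally requires root:

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    )

    func main() {
    	// Socket path taken from the dockerd log lines above; any
    	// containerd address works the same way.
    	client, err := containerd.New("/var/run/docker/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	v, err := client.Version(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("containerd %s (%s)\n", v.Version, v.Revision)
    }

The version and revision printed correspond to the "starting containerd" revision/version fields logged by the managed containerd process above.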
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.292813222Z" level=info msg="Processing signal 'terminated'"
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 systemd[1]: Stopping Docker Application Container Engine...
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.294859626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295213927Z" level=info msg="Daemon shutdown complete"
	I0612 15:03:47.303949   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295258527Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0612 15:03:47.303974   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295281927Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0612 15:03:47.303974   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: docker.service: Deactivated successfully.
	I0612 15:03:47.303974   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Stopped Docker Application Container Engine.
	I0612 15:03:47.304018   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:47.304018   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.376333019Z" level=info msg="Starting up"
	I0612 15:03:47.304047   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.377520222Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.378639425Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.412854304Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437361860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437471260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437558660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437600861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437638361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437674061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437957561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438006462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438028962Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438041362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438072362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438209862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441166869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441307169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441467569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441599370Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441629870Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441648170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441660470Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442075271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442166571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442187871Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442201971Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442217371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442266071Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442474372Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442551072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442567272Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442579372Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442592672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304843   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442605072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304843   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442627672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304843   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442645772Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304910   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442660172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304935   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442671872Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304964   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442683572Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.305003   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442694372Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.305084   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442714572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305084   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442727972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305113   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442739972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305113   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442754772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305113   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442766572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305174   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442778073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305220   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442788873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305220   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442800473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305220   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442812673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442826373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442837973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442849073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442860373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442875173Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442974073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442994973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443006773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443066573Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443088973Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443100473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443113173Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443144073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443156573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443166273Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443418874Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443494174Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443534574Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443571274Z" level=info msg="containerd successfully booted in 0.033238s"
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.419757425Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.449018892Z" level=info msg="Loading containers: start."
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.739331061Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.815989438Z" level=info msg="Loading containers: done."
	I0612 15:03:47.305947   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842536299Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:47.305947   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842674899Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:47.305947   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885012997Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885608398Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Loaded network plugin cni"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start cri-dockerd grpc backend"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-vgcxw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d\""
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-45qqd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27\""
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449365529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449468129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449499429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449616229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464315863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464397563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464444563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464765264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.578440826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.581064832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582145135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582532135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617373216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617486816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617504016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617593816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da184577f0371664d0a472b38bbfcfd866178308bf69eaabdaefb47d30a7057a/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.306743   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a228f6c30fdf44f53a40ac14a2a8b995155f743739957ac413c700924fc873ed/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.306743   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20cbfb3fb853177b89366d165b6a1f67628b2c429266b77034ee6d1ca68b7bac/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.306743   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.306840   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094370315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306840   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094456516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306898   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094499716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306925   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094865116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306925   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.162934973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306925   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163009674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306993   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163029074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306993   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163177074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306993   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.167659984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307080   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170028290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307108   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170289390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307145   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.171053192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307145   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233482736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307145   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233861237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307215   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234167138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307238   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234578639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307267   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0612 15:03:47.307302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.197280978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198158780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307369   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198341381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307393   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213822116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307421   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213977717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307455   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214060117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307455   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214298317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307455   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234135963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307455   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234182263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307607   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234192563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307692   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234264863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564394224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564548725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564602325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.565056126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630517377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630663477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630850678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.635052387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.972834166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.973545267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974028469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974235669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1044]: time="2024-06-12T22:03:03.121297409Z" level=info msg="ignoring event" container=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.122616734Z" level=info msg="shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123474651Z" level=warning msg="cleaning up after shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:47.308291   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123682355Z" level=info msg="cleaning up dead shim" namespace=moby
	I0612 15:03:47.308291   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819634342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308291   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819751243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308402   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819788644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308402   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.820654753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308402   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004015440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308402   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004176540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308527   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004193540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308566   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.005298945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308602   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006561551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308602   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006633551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308643   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006681251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006796752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/986567ef57643aec05ae5353795c364b380cb0f13c2ba98b1c4e04897e7b2e46/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2434f89aefe0079002e81e136580c67ef1dca28bfa3b4c1e950241aea9663d4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542434894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542705495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542742195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.543238997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606926167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606994167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607017268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607410069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.328874   13752 logs.go:123] Gathering logs for kubelet ...
	I0612 15:03:47.328874   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:21 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.063456    1381 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064093    1381 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064387    1381 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: E0612 22:02:22.065868    1381 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789327    1437 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789465    1437 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.790480    1437 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: E0612 22:02:22.790564    1437 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:47.366852   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.366852   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:47.366938   13752 command_runner.go:130] > Jun 12 22:02:23 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366938   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414046    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414147    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414632    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.416608    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.437750    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458497    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458849    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460038    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460095    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-025000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464057    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464080    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464924    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466519    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466546    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466613    1517 kubelet.go:312] "Adding apiserver pod source"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.467352    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.471384    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.471502    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.471869    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.477415    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.478424    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.480523    1517 server.go:1264] "Started kubelet"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.481568    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.367517   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.481666    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.367592   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.481865    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0612 15:03:47.367592   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.482789    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0612 15:03:47.367592   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.485497    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0612 15:03:47.367728   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.490040    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:47.367763   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.493219    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0612 15:03:47.367763   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.495119    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0612 15:03:47.367800   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.496095    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0612 15:03:47.367836   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.498560    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0612 15:03:47.367888   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501388    1517 factory.go:221] Registration of the systemd container factory successfully
	I0612 15:03:47.367933   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501556    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0612 15:03:47.367933   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501657    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0612 15:03:47.368013   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.510641    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368061   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.510706    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.521028    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="200ms"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.554579    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.594809    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595077    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595178    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598081    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598418    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598595    1517 policy_none.go:49] "None policy: Start"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.600760    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.602144    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610755    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610783    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610843    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.611758    1517 state_mem.go:75] "Updated machine memory state"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.613995    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.614216    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615027    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615636    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615685    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.615730    1517 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.616221    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.632621    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.632711    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.634150    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-025000\" not found"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.644874    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:47.368611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:47.368669   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:47.368669   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:47.368755   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.717070    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d6071cd4356268889f798790dc93ce06" podNamespace="kube-system" podName="kube-apiserver-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.719714    1517 topology_manager.go:215] "Topology Admit Handler" podUID="88de11d8b1aaec126153d44e87c4b5dd" podNamespace="kube-system" podName="kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.720740    1517 topology_manager.go:215] "Topology Admit Handler" podUID="de62e7fd7d0feea82620e745032c1a67" podNamespace="kube-system" podName="kube-scheduler-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.722295    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="400ms"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.724629    1517 topology_manager.go:215] "Topology Admit Handler" podUID="7b6b5637642f3d915c0db1461c7074e6" podNamespace="kube-system" podName="etcd-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725657    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad98f611536b15941d0f49c694b6b6c39318bca8a66620735a88a81a12d3610"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725708    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb4351fab502e49592d49234119b810b53c5916eaf732d4ba148b3ad1eed4e6a"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725720    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9e051df48486e732da2c72bf2d0e3ec93cf8774632ecedd8825e656ba04a93"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725728    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2784305b1d5e9a088f0b73ff004b2d9eca305d397de3d7b9912638323d7c66b2"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725737    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40443305b24f54fea9235d98bfb16f2d550b8914bfa46c0592b5c24be1ad5569"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.736677    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9933fdc9ca72b65b57e5b4b996215763431b87f18af45fdc8195252497e1d9a"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.760928    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.777475    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.794474    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f2d5f19e95ea2d1cfe140159a55c94f5d809c3b67661196b1e285ac389537f"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.803790    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.804820    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885533    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-ca-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885705    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-ca-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885746    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-k8s-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885768    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-k8s-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885803    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-kubeconfig\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885844    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885869    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de62e7fd7d0feea82620e745032c1a67-kubeconfig\") pod \"kube-scheduler-multinode-025000\" (UID: \"de62e7fd7d0feea82620e745032c1a67\") " pod="kube-system/kube-scheduler-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885941    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-certs\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885970    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-data\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885997    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.886023    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-flexvolume-dir\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.124157    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="800ms"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.206204    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.207259    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.576346    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.576490    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.832319    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.832430    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.847085    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.847226    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.894179    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.894251    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.910045    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.925848    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="1.6s"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.967442    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: I0612 22:02:27.008640    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: E0612 22:02:27.009541    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:28 multinode-025000 kubelet[1517]: I0612 22:02:28.611782    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.067503    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.069193    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-025000"
	I0612 15:03:47.370206   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.078543    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.083746    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.087512    1517 setters.go:580] "Node became not ready" node="multinode-025000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-12T22:02:31Z","lastTransitionTime":"2024-06-12T22:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.485482    1517 apiserver.go:52] "Watching apiserver"
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.491838    1517 topology_manager.go:215] "Topology Admit Handler" podUID="1f004a05-3f5f-444b-9ac0-88f0e23da904" podNamespace="kube-system" podName="kindnet-bqlg8"
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.492246    1517 topology_manager.go:215] "Topology Admit Handler" podUID="10b24fa7-8eea-4fbb-ab18-404e853aa7ab" podNamespace="kube-system" podName="kube-proxy-47lr8"
	I0612 15:03:47.370401   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.493249    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-025000" podUID="6b429685-b322-4b00-83fc-743786ff40e1"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494355    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-025000" podUID="630bafc4-4576-4974-b638-7ab52dcfec18"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494642    1517 topology_manager.go:215] "Topology Admit Handler" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgcxw"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494763    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4" podNamespace="kube-system" podName="storage-provisioner"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494876    1517 topology_manager.go:215] "Topology Admit Handler" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4" podNamespace="default" podName="busybox-fc5497c4f-45qqd"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495127    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495306    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.499353    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.541672    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.557538    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-025000"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593012    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-cni-cfg\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593075    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-lib-modules\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593188    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-lib-modules\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593684    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d20f7489-1aa1-44b8-9221-4d1849884be4-tmp\") pod \"storage-provisioner\" (UID: \"d20f7489-1aa1-44b8-9221-4d1849884be4\") " pod="kube-system/storage-provisioner"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593711    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-xtables-lock\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593752    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-xtables-lock\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594460    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.094549489 +0000 UTC m=+6.763435539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.622682    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dcbc8e258f964f689941b6844769d9" path="/var/lib/kubelet/pods/04dcbc8e258f964f689941b6844769d9/volumes"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.623801    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610414aa8160848c0b6b79ea0a700b83" path="/var/lib/kubelet/pods/610414aa8160848c0b6b79ea0a700b83/volumes"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.626972    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627014    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.371748   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627132    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.127114564 +0000 UTC m=+6.796000614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.371748   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.673848    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-025000" podStartSLOduration=0.673800971 podStartE2EDuration="673.800971ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.632162175 +0000 UTC m=+6.301048225" watchObservedRunningTime="2024-06-12 22:02:31.673800971 +0000 UTC m=+6.342686921"
	I0612 15:03:47.371908   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.674234    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-025000" podStartSLOduration=0.674226172 podStartE2EDuration="674.226172ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.67337587 +0000 UTC m=+6.342261920" watchObservedRunningTime="2024-06-12 22:02:31.674226172 +0000 UTC m=+6.343112222"
	I0612 15:03:47.371924   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099190    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.372002   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099284    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.099266752 +0000 UTC m=+7.768152702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.372002   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199774    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372104   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199808    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199864    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.199845384 +0000 UTC m=+7.868731334 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.394461    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.774495    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.791274    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106313    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106394    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.106375874 +0000 UTC m=+9.775261924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208318    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208375    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208431    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.208413609 +0000 UTC m=+9.877299559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.617822    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.618103    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.125562    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.126376    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.12633293 +0000 UTC m=+13.795218980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226548    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226607    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372654   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226693    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.226674161 +0000 UTC m=+13.895560111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372654   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.616712    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617047    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617270    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618147    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618607    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164650    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164956    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.164935524 +0000 UTC m=+21.833821574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.265764    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266004    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.266062158 +0000 UTC m=+21.934948208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.616548    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.617577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:40 multinode-025000 kubelet[1517]: E0612 22:02:40.619032    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617010    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617816    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617105    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617755    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.617112    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.618034    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.621402    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.373471   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234271    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.373471   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.234402815 +0000 UTC m=+37.903288765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.373558   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335532    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.373558   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335632    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.373634   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335696    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.33568009 +0000 UTC m=+38.004566140 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.373700   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617048    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373700   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617530    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373784   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617040    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373862   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617673    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373887   13752 command_runner.go:130] > Jun 12 22:02:50 multinode-025000 kubelet[1517]: E0612 22:02:50.623368    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.616848    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.617656    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617130    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617679    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617082    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617595    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.624795    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.617430    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.618180    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.616577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.617339    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:00 multinode-025000 kubelet[1517]: E0612 22:03:00.626741    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617176    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617573    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236005    1517 scope.go:117] "RemoveContainer" containerID="61910369e0d4ba1a5246a686e904c168fc7467d239e475004146ddf2835e8e78"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236962    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:47.374516   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.239739    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d20f7489-1aa1-44b8-9221-4d1849884be4)\"" pod="kube-system/storage-provisioner" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4"
	I0612 15:03:47.374682   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284341    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.284401461 +0000 UTC m=+69.953287411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385432    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385531    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.385594617 +0000 UTC m=+70.054480667 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.616668    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.617100    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617214    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617674    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.628542    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.616455    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.617581    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617093    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617405    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 kubelet[1517]: I0612 22:03:13.617647    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.637114    1517 scope.go:117] "RemoveContainer" containerID="0749f44d03561395230c8a60a41853a49502741bf3bcd45acc924d346061f5b0"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: E0612 22:03:25.663119    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:47.375237   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:47.375237   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:47.375237   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.699754    1517 scope.go:117] "RemoveContainer" containerID="2455f315465b9508a3fe1025d7150342eedb3cb09eb5f8fd9b2cbbffe1306db0"
	I0612 15:03:47.408558   13752 logs.go:123] Gathering logs for dmesg ...
	I0612 15:03:47.408558   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 15:03:47.438236   13752 command_runner.go:130] > [Jun12 22:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.131000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.025099] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.064850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.023448] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0612 15:03:47.438236   13752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +5.508165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +1.342262] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +1.269809] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +7.259362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0612 15:03:47.438236   13752 command_runner.go:130] > [Jun12 22:01] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.155290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [Jun12 22:02] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.095843] kauditd_printk_skb: 73 callbacks suppressed
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.507476] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.171390] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.210222] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +2.904531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.189304] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.162041] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.261611] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.815328] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.096217] kauditd_printk_skb: 205 callbacks suppressed
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +3.646175] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +1.441935] kauditd_printk_skb: 54 callbacks suppressed
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +5.624550] kauditd_printk_skb: 20 callbacks suppressed
	I0612 15:03:47.439238   13752 command_runner.go:130] > [  +3.644538] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	I0612 15:03:47.439238   13752 command_runner.go:130] > [  +8.250122] kauditd_printk_skb: 70 callbacks suppressed
	I0612 15:03:47.441204   13752 logs.go:123] Gathering logs for etcd [6b61f5f6483d] ...
	I0612 15:03:47.441204   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61f5f6483d"
	I0612 15:03:47.466449   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.594582Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:47.469533   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.595941Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.200.184:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.200.184:2380","--initial-cluster=multinode-025000=https://172.23.200.184:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.200.184:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.200.184:2380","--name=multinode-025000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0612 15:03:47.469582   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596165Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0612 15:03:47.469637   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.596271Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:47.469684   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596356Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.200.184:2380"]}
	I0612 15:03:47.469765   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596492Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:47.469799   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.611167Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"]}
	I0612 15:03:47.469851   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.613093Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-025000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0612 15:03:47.469851   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.643295Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"27.151363ms"}
	I0612 15:03:47.469986   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.674268Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0612 15:03:47.470033   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702241Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","commit-index":2039}
	I0612 15:03:47.470033   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=()"}
	I0612 15:03:47.470084   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became follower at term 2"}
	I0612 15:03:47.470084   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.70261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b93ef5bd064a9684 [peers: [], term: 2, commit: 2039, applied: 0, lastindex: 2039, lastterm: 2]"}
	I0612 15:03:47.470147   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.719372Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0612 15:03:47.470203   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.724082Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1403}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.735755Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1769}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.743333Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.753311Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b93ef5bd064a9684","timeout":"7s"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755587Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b93ef5bd064a9684"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755671Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b93ef5bd064a9684","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758078Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758939Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0612 15:03:47.470760   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=(13348376537775904388)"}
	I0612 15:03:47.470760   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759589Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","added-peer-id":"b93ef5bd064a9684","added-peer-peer-urls":["https://172.23.198.154:2380"]}
	I0612 15:03:47.470760   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.760197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","cluster-version":"3.5"}
	I0612 15:03:47.470858   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.761198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0612 15:03:47.470858   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.764395Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:47.470996   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.765492Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b93ef5bd064a9684","initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0612 15:03:47.471051   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766195Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0612 15:03:47.471098   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766744Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.200.184:2380"}
	I0612 15:03:47.471098   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.767384Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.200.184:2380"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 is starting a new election at term 2"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.50332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became pre-candidate at term 2"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgPreVoteResp from b93ef5bd064a9684 at term 2"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became candidate at term 3"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgVoteResp from b93ef5bd064a9684 at term 3"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became leader at term 3"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b93ef5bd064a9684 elected leader b93ef5bd064a9684 at term 3"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b93ef5bd064a9684","local-member-attributes":"{Name:multinode-025000 ClientURLs:[https://172.23.200.184:2379]}","request-path":"/0/members/b93ef5bd064a9684/attributes","cluster-id":"a7fa2563dcb4b7b8","publish-timeout":"7s"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.512996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.513243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.514729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.515422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.200.184:2379"}
	I0612 15:03:47.477975   13752 logs.go:123] Gathering logs for kube-scheduler [755750ecd1e3] ...
	I0612 15:03:47.477975   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 755750ecd1e3"
	I0612 15:03:47.502405   13752 command_runner.go:130] ! I0612 22:02:28.771072       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:47.504030   13752 command_runner.go:130] ! W0612 22:02:31.003959       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:47.504089   13752 command_runner.go:130] ! W0612 22:02:31.004072       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.504157   13752 command_runner.go:130] ! W0612 22:02:31.004087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:47.504157   13752 command_runner.go:130] ! W0612 22:02:31.004098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.034273       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.034440       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.039288       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.039331       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.039699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.040018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.139849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:47.506197   13752 logs.go:123] Gathering logs for kube-scheduler [6b021c195669] ...
	I0612 15:03:47.506197   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b021c195669"
	I0612 15:03:47.533859   13752 command_runner.go:130] ! I0612 21:39:26.474423       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:47.533859   13752 command_runner.go:130] ! W0612 21:39:28.263287       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:47.537866   13752 command_runner.go:130] ! W0612 21:39:28.263543       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.537866   13752 command_runner.go:130] ! W0612 21:39:28.263706       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:47.537942   13752 command_runner.go:130] ! W0612 21:39:28.263849       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:47.537974   13752 command_runner.go:130] ! I0612 21:39:28.303051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:47.537974   13752 command_runner.go:130] ! I0612 21:39:28.305840       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.538040   13752 command_runner.go:130] ! I0612 21:39:28.310682       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:47.538071   13752 command_runner.go:130] ! I0612 21:39:28.312812       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:47.538071   13752 command_runner.go:130] ! I0612 21:39:28.313421       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:47.538071   13752 command_runner.go:130] ! I0612 21:39:28.313594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.538907   13752 command_runner.go:130] ! W0612 21:39:28.336905       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.539034   13752 command_runner.go:130] ! E0612 21:39:28.337826       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.539034   13752 command_runner.go:130] ! W0612 21:39:28.338227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:47.539034   13752 command_runner.go:130] ! E0612 21:39:28.338391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:47.539112   13752 command_runner.go:130] ! W0612 21:39:28.338652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:47.539112   13752 command_runner.go:130] ! E0612 21:39:28.338896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:47.539112   13752 command_runner.go:130] ! W0612 21:39:28.339195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:47.539195   13752 command_runner.go:130] ! E0612 21:39:28.339406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:47.539220   13752 command_runner.go:130] ! W0612 21:39:28.339694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:47.539267   13752 command_runner.go:130] ! E0612 21:39:28.339892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:47.539267   13752 command_runner.go:130] ! W0612 21:39:28.340188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:47.539348   13752 command_runner.go:130] ! E0612 21:39:28.340362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:47.539376   13752 command_runner.go:130] ! W0612 21:39:28.340697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539376   13752 command_runner.go:130] ! E0612 21:39:28.341129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539376   13752 command_runner.go:130] ! W0612 21:39:28.341447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539495   13752 command_runner.go:130] ! E0612 21:39:28.341664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539495   13752 command_runner.go:130] ! W0612 21:39:28.341989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:47.539495   13752 command_runner.go:130] ! E0612 21:39:28.342229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:47.539574   13752 command_runner.go:130] ! W0612 21:39:28.342540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539574   13752 command_runner.go:130] ! E0612 21:39:28.344839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539668   13752 command_runner.go:130] ! W0612 21:39:28.345316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:47.539741   13752 command_runner.go:130] ! E0612 21:39:28.347872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:47.539741   13752 command_runner.go:130] ! W0612 21:39:28.345596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539859   13752 command_runner.go:130] ! W0612 21:39:28.345651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:47.539859   13752 command_runner.go:130] ! W0612 21:39:28.345691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:47.539859   13752 command_runner.go:130] ! W0612 21:39:28.345823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:47.539966   13752 command_runner.go:130] ! E0612 21:39:28.348490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539966   13752 command_runner.go:130] ! E0612 21:39:28.348742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:47.540065   13752 command_runner.go:130] ! E0612 21:39:28.349066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:47.540065   13752 command_runner.go:130] ! E0612 21:39:28.349147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:47.540135   13752 command_runner.go:130] ! W0612 21:39:29.192073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:47.540135   13752 command_runner.go:130] ! E0612 21:39:29.192126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:47.540135   13752 command_runner.go:130] ! W0612 21:39:29.249000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540217   13752 command_runner.go:130] ! E0612 21:39:29.249248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540279   13752 command_runner.go:130] ! W0612 21:39:29.268880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:47.540279   13752 command_runner.go:130] ! E0612 21:39:29.268972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:47.540279   13752 command_runner.go:130] ! W0612 21:39:29.271696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540279   13752 command_runner.go:130] ! E0612 21:39:29.271839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540470   13752 command_runner.go:130] ! W0612 21:39:29.275489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.296739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.297145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.433593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.433887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.471880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.471994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.482669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.483008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.569402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.571433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.677906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.677950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:47.541146   13752 command_runner.go:130] ! W0612 21:39:29.687951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:47.541146   13752 command_runner.go:130] ! E0612 21:39:29.688054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:47.541146   13752 command_runner.go:130] ! W0612 21:39:29.780288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:47.541146   13752 command_runner.go:130] ! E0612 21:39:29.780411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:47.541377   13752 command_runner.go:130] ! W0612 21:39:29.832564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:47.541377   13752 command_runner.go:130] ! E0612 21:39:29.832892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:47.541377   13752 command_runner.go:130] ! W0612 21:39:29.889591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.541453   13752 command_runner.go:130] ! E0612 21:39:29.889868       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.541485   13752 command_runner.go:130] ! I0612 21:39:32.513980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:47.541522   13752 command_runner.go:130] ! E0612 22:00:01.172050       1 run.go:74] "command failed" err="finished without leader elect"
	I0612 15:03:47.552398   13752 logs.go:123] Gathering logs for container status ...
	I0612 15:03:47.552398   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 15:03:47.624621   13752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0612 15:03:47.624621   13752 command_runner.go:130] > f2a949d407287       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   2434f89aefe00       busybox-fc5497c4f-45qqd
	I0612 15:03:47.624621   13752 command_runner.go:130] > 26e5daf354e36       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   986567ef57643       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:47.624621   13752 command_runner.go:130] > 448e057077ddc       6e38f40d628db                                                                                         34 seconds ago       Running             storage-provisioner       2                   5287b61207e62       storage-provisioner
	I0612 15:03:47.624621   13752 command_runner.go:130] > cccfd1e9fef5e       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a20975d81b350       kindnet-bqlg8
	I0612 15:03:47.624621   13752 command_runner.go:130] > 3546a5c003210       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5287b61207e62       storage-provisioner
	I0612 15:03:47.624621   13752 command_runner.go:130] > 227a905829b07       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   435c56b0fbbbb       kube-proxy-47lr8
	I0612 15:03:47.624621   13752 command_runner.go:130] > 6b61f5f6483d5       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   76517193a960a       etcd-multinode-025000
	I0612 15:03:47.624621   13752 command_runner.go:130] > bbe2d2e51b5f3       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   20cbfb3fb8531       kube-apiserver-multinode-025000
	I0612 15:03:47.624621   13752 command_runner.go:130] > 7acc8ff0a9317       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   a228f6c30fdf4       kube-controller-manager-multinode-025000
	I0612 15:03:47.624621   13752 command_runner.go:130] > 755750ecd1e39       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   da184577f0371       kube-scheduler-multinode-025000
	I0612 15:03:47.624621   13752 command_runner.go:130] > bfc0382d49a48       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   84a9b747663ca       busybox-fc5497c4f-45qqd
	I0612 15:03:47.625152   13752 command_runner.go:130] > e83cf4eef49e4       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   894c58e9fe752       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:47.625152   13752 command_runner.go:130] > 4d60d82f6bc5d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   92f2d5f19e95e       kindnet-bqlg8
	I0612 15:03:47.625152   13752 command_runner.go:130] > c4842faba751e       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   fad98f611536b       kube-proxy-47lr8
	I0612 15:03:47.625152   13752 command_runner.go:130] > 6b021c195669e       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d9933fdc9ca72       kube-scheduler-multinode-025000
	I0612 15:03:47.625335   13752 command_runner.go:130] > 685d167da53c9       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bb4351fab502e       kube-controller-manager-multinode-025000
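
The container inventory above comes from the crictl-or-docker fallback in the Run: line: every expected component is Running (etcd and kube-apiserver as fresh attempt-0 containers, the rest as attempt-1 restarts), and the Exited rows are the pre-restart containers from the original boot 20-24 minutes earlier. The same inventory can be pulled by hand over minikube's SSH tunnel (a sketch, reusing the profile name from this test):

	minikube -p multinode-025000 ssh -- "sudo crictl ps -a || sudo docker ps -a"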
	I0612 15:03:47.627616   13752 logs.go:123] Gathering logs for kube-proxy [227a905829b0] ...
	I0612 15:03:47.627648   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227a905829b0"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.538961       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.585761       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.200.184"]
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.754056       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.754118       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.754141       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.765449       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.766192       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.766246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.769980       1 config.go:192] "Starting service config controller"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.770461       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.770493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.770630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.773852       1 config.go:319] "Starting node config controller"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.773944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.870743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.870698       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.882534       1 shared_informer.go:320] Caches are synced for node config
	I0612 15:03:47.660659   13752 logs.go:123] Gathering logs for kube-proxy [c4842faba751] ...
	I0612 15:03:47.660659   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4842faba751"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.407607       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.423801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.198.154"]
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.480061       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.480182       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.480205       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.484521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.485171       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.485535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.488126       1 config.go:192] "Starting service config controller"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.488162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.488188       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.488197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:47.689867   13752 command_runner.go:130] ! I0612 21:39:47.488969       1 config.go:319] "Starting node config controller"
	I0612 15:03:47.690111   13752 command_runner.go:130] ! I0612 21:39:47.489001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:47.690111   13752 command_runner.go:130] ! I0612 21:39:47.588500       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:47.690111   13752 command_runner.go:130] ! I0612 21:39:47.588641       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:47.690111   13752 command_runner.go:130] ! I0612 21:39:47.589226       1 shared_informer.go:320] Caches are synced for node config
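
Both kube-proxy dumps above, the running attempt-1 container (227a905829b0) and the exited attempt-0 one (c4842faba751), show the same healthy startup sequence: iptables proxier selected, node IP retrieved, and all three config caches synced within a second. Only the node IP differs (172.23.200.184 after the restart versus 172.23.198.154 originally), likely because the Hyper-V switch handed the VM a new DHCP lease on reboot. When scanning dumps like these, it can be faster to filter for klog warning/error records, whose lines begin with W or E plus a date (a sketch):

	minikube -p multinode-025000 ssh -- "docker logs --tail 400 227a905829b0 2>&1 | grep -E '^[EW][0-9]{4}'"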
	I0612 15:03:47.691875   13752 logs.go:123] Gathering logs for kube-controller-manager [685d167da53c] ...
	I0612 15:03:47.691875   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685d167da53c"
	I0612 15:03:47.717904   13752 command_runner.go:130] ! I0612 21:39:26.275086       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:47.717904   13752 command_runner.go:130] ! I0612 21:39:26.758419       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:47.717904   13752 command_runner.go:130] ! I0612 21:39:26.759036       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.718822   13752 command_runner.go:130] ! I0612 21:39:26.761311       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:47.718822   13752 command_runner.go:130] ! I0612 21:39:26.761663       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.718822   13752 command_runner.go:130] ! I0612 21:39:26.762454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:47.718822   13752 command_runner.go:130] ! I0612 21:39:26.762652       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.718898   13752 command_runner.go:130] ! I0612 21:39:31.260969       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:47.718944   13752 command_runner.go:130] ! I0612 21:39:31.261096       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:47.718944   13752 command_runner.go:130] ! E0612 21:39:31.316508       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:47.718995   13752 command_runner.go:130] ! I0612 21:39:31.316587       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:47.719024   13752 command_runner.go:130] ! I0612 21:39:31.342032       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:47.719024   13752 command_runner.go:130] ! I0612 21:39:31.342287       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:47.719024   13752 command_runner.go:130] ! I0612 21:39:31.342304       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:47.719024   13752 command_runner.go:130] ! I0612 21:39:31.362243       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:47.719093   13752 command_runner.go:130] ! I0612 21:39:31.399024       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:47.719121   13752 command_runner.go:130] ! I0612 21:39:31.399081       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:47.719121   13752 command_runner.go:130] ! I0612 21:39:31.399264       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:47.719121   13752 command_runner.go:130] ! I0612 21:39:31.443376       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:47.719121   13752 command_runner.go:130] ! I0612 21:39:31.443603       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:47.719185   13752 command_runner.go:130] ! I0612 21:39:31.443617       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:47.719185   13752 command_runner.go:130] ! I0612 21:39:31.480477       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:47.719185   13752 command_runner.go:130] ! I0612 21:39:31.480993       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:47.719185   13752 command_runner.go:130] ! I0612 21:39:31.481007       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:47.719244   13752 command_runner.go:130] ! I0612 21:39:31.523943       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:47.719273   13752 command_runner.go:130] ! I0612 21:39:31.524182       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.524535       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.524741       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.553194       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.554412       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.556852       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.560273       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.560448       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.561614       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.561933       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593438       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593459       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593534       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593588       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593650       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593701       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593721       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593739       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.594051       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.594202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.594262       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.594286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594306       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594500       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594602       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594857       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594957       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.595276       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.595463       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.605247       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:47.719977   13752 command_runner.go:130] ! I0612 21:39:31.605722       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:47.719977   13752 command_runner.go:130] ! I0612 21:39:31.607199       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:47.719977   13752 command_runner.go:130] ! I0612 21:39:31.668704       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.669329       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.669521       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.820968       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.821104       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.821117       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:47.720128   13752 command_runner.go:130] ! I0612 21:39:31.973500       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:47.720128   13752 command_runner.go:130] ! I0612 21:39:31.973543       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:47.720128   13752 command_runner.go:130] ! I0612 21:39:31.975344       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:47.720202   13752 command_runner.go:130] ! I0612 21:39:31.975377       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:47.720227   13752 command_runner.go:130] ! I0612 21:39:32.163715       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:47.720227   13752 command_runner.go:130] ! I0612 21:39:32.163860       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:47.720291   13752 command_runner.go:130] ! I0612 21:39:32.320380       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:47.720314   13752 command_runner.go:130] ! I0612 21:39:32.320516       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:47.720314   13752 command_runner.go:130] ! I0612 21:39:32.320529       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:47.720314   13752 command_runner.go:130] ! I0612 21:39:32.468817       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:47.720382   13752 command_runner.go:130] ! I0612 21:39:32.468893       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:47.720382   13752 command_runner.go:130] ! I0612 21:39:32.636144       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:47.720382   13752 command_runner.go:130] ! I0612 21:39:32.636921       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:47.720382   13752 command_runner.go:130] ! I0612 21:39:32.637331       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:47.720457   13752 command_runner.go:130] ! I0612 21:39:32.775300       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:47.720487   13752 command_runner.go:130] ! I0612 21:39:32.776007       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:47.720487   13752 command_runner.go:130] ! I0612 21:39:32.778803       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:47.720487   13752 command_runner.go:130] ! I0612 21:39:32.920254       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:47.720487   13752 command_runner.go:130] ! I0612 21:39:32.920359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:47.720558   13752 command_runner.go:130] ! I0612 21:39:32.920902       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:47.720558   13752 command_runner.go:130] ! I0612 21:39:33.069533       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:47.720618   13752 command_runner.go:130] ! I0612 21:39:33.069689       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:47.720618   13752 command_runner.go:130] ! I0612 21:39:33.069704       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:47.720676   13752 command_runner.go:130] ! I0612 21:39:33.069713       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:47.720676   13752 command_runner.go:130] ! I0612 21:39:33.115693       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:47.720730   13752 command_runner.go:130] ! I0612 21:39:33.115796       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:47.720730   13752 command_runner.go:130] ! I0612 21:39:33.115809       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:47.720730   13752 command_runner.go:130] ! I0612 21:39:33.116021       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:47.720730   13752 command_runner.go:130] ! I0612 21:39:33.116257       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:47.720804   13752 command_runner.go:130] ! I0612 21:39:33.116416       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:47.720829   13752 command_runner.go:130] ! I0612 21:39:33.169481       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.169523       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.169561       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.170619       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.170693       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.170745       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.171426       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.171458       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.171479       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.172032       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.172160       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.172352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.172295       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.229790       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.230104       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.230715       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.230868       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:47.720860   13752 command_runner.go:130] ! E0612 21:39:43.246433       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.246740       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.246878       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.247178       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.259694       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.260105       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.260326       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.287038       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.287747       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.289545       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.296881       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.297485       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.297679       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.315673       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.316362       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.316724       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.331329       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.331610       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.331966       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.358081       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.358485       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.358595       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:47.721547   13752 command_runner.go:130] ! I0612 21:39:43.358609       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:47.721547   13752 command_runner.go:130] ! I0612 21:39:43.373221       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:47.721547   13752 command_runner.go:130] ! I0612 21:39:43.373371       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:47.721547   13752 command_runner.go:130] ! I0612 21:39:43.373388       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:47.721642   13752 command_runner.go:130] ! I0612 21:39:43.386049       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:47.721683   13752 command_runner.go:130] ! I0612 21:39:43.386265       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:47.721683   13752 command_runner.go:130] ! I0612 21:39:43.387457       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:47.721683   13752 command_runner.go:130] ! I0612 21:39:43.473855       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:47.721683   13752 command_runner.go:130] ! I0612 21:39:43.474115       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:47.721755   13752 command_runner.go:130] ! I0612 21:39:43.474421       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:47.721755   13752 command_runner.go:130] ! I0612 21:39:43.622457       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:47.721755   13752 command_runner.go:130] ! I0612 21:39:43.622831       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:47.721820   13752 command_runner.go:130] ! I0612 21:39:43.622950       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:47.721820   13752 command_runner.go:130] ! I0612 21:39:43.776632       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.777149       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.777203       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.923199       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.923416       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.923557       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.219008       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.219041       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.219093       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.219104       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.375322       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.375879       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.375896       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.419335       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.419357       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.419672       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.435364       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.441191       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.456985       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.457052       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.460648       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.463138       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.469825       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.469846       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.469856       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.471608       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.471748       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.472789       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.474041       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.475483       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.475505       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.476080       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.479252       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.481788       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.488300       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.491059       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.499063       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.500304       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.507471       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.525355       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.525889       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.526177       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:47.722761   13752 command_runner.go:130] ! I0612 21:39:44.526390       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:47.722761   13752 command_runner.go:130] ! I0612 21:39:44.526550       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:47.722761   13752 command_runner.go:130] ! I0612 21:39:44.526951       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:47.722761   13752 command_runner.go:130] ! I0612 21:39:44.527038       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:47.722834   13752 command_runner.go:130] ! I0612 21:39:44.528601       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:47.722861   13752 command_runner.go:130] ! I0612 21:39:44.528834       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:47.722861   13752 command_runner.go:130] ! I0612 21:39:44.531261       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.531462       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.531679       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.531942       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.532097       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.532523       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.537873       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.543447       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.564610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.568950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000" podCIDRs=["10.244.0.0/24"]
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.621264       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.644803       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.677466       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.696400       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.723303       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.735837       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.758870       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:45.157877       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:45.226557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:45.226973       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.795416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="243.746414ms"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.868449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.90937ms"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.868845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.402µs"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.869382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.903µs"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.905402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="386.807µs"
	I0612 15:03:47.723572   13752 command_runner.go:130] ! I0612 21:39:46.349409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.452815ms"
	I0612 15:03:47.723599   13752 command_runner.go:130] ! I0612 21:39:46.386321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.301621ms"
	I0612 15:03:47.723599   13752 command_runner.go:130] ! I0612 21:39:46.386974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="616.309µs"
	I0612 15:03:47.723599   13752 command_runner.go:130] ! I0612 21:39:56.441072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="366.601µs"
	I0612 15:03:47.723685   13752 command_runner.go:130] ! I0612 21:39:56.465727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.4µs"
	I0612 15:03:47.723685   13752 command_runner.go:130] ! I0612 21:39:57.870560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.5µs"
	I0612 15:03:47.723685   13752 command_runner.go:130] ! I0612 21:39:58.874445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448319ms"
	I0612 15:03:47.723780   13752 command_runner.go:130] ! I0612 21:39:58.875168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="103.901µs"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:39:59.529553       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:42:39.169243       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:42:39.188142       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:42:39.563565       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:42:58.063730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:24.138579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.052538ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:24.156190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.434267ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:24.156677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.099µs"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:24.183391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.299µs"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:26.908415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.051448ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:26.908853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34µs"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:27.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.474956ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:27.304566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488944ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:16.485552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:16.486568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:16.503987       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.2.0/24"]
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:19.629018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:35.032365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:55:19.767980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:57:52.374240       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:57:58.774442       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:57:58.774588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:57:58.809041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.3.0/24"]
	I0612 15:03:47.724355   13752 command_runner.go:130] ! I0612 21:58:06.126407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.724355   13752 command_runner.go:130] ! I0612 21:59:45.222238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
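
The exited controller-manager's log is clean end to end: every controller starts, caches sync, PodCIDRs are allocated for the control plane and workers (10.244.0.0/24 and 10.244.1.0/24, then 10.244.2.0/24 and 10.244.3.0/24 as multinode-025000-m03 was re-added), and the node briefly enters and then exits "master disruption mode" around the first boot. The recurring "Can't get CPU or zone information for node" lines are informational; they typically appear when nodes carry no topology zone labels, which minikube nodes normally do not set. That can be confirmed by listing the label columns (a sketch):

	kubectl --context multinode-025000 get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region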
	I0612 15:03:47.738653   13752 logs.go:123] Gathering logs for describe nodes ...
	I0612 15:03:47.738653   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 15:03:47.956842   13752 command_runner.go:130] > Name:               multinode-025000
	I0612 15:03:47.956842   13752 command_runner.go:130] > Roles:              control-plane
	I0612 15:03:47.956842   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0612 15:03:47.956842   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:47.956842   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:39:28 +0000
	I0612 15:03:47.956842   13752 command_runner.go:130] > Taints:             <none>
	I0612 15:03:47.956842   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:47.956842   13752 command_runner.go:130] > Lease:
	I0612 15:03:47.956842   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000
	I0612 15:03:47.956842   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:47.956842   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 22:03:42 +0000
	I0612 15:03:47.956842   13752 command_runner.go:130] > Conditions:
	I0612 15:03:47.956842   13752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0612 15:03:47.956842   13752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0612 15:03:47.956842   13752 command_runner.go:130] >   MemoryPressure   False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0612 15:03:47.956842   13752 command_runner.go:130] >   DiskPressure     False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0612 15:03:47.956842   13752 command_runner.go:130] >   PIDPressure      False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0612 15:03:47.956842   13752 command_runner.go:130] >   Ready            True    Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 22:03:11 +0000   KubeletReady                 kubelet is posting ready status
	I0612 15:03:47.956842   13752 command_runner.go:130] > Addresses:
	I0612 15:03:47.957371   13752 command_runner.go:130] >   InternalIP:  172.23.200.184
	I0612 15:03:47.957371   13752 command_runner.go:130] >   Hostname:    multinode-025000
	I0612 15:03:47.957371   13752 command_runner.go:130] > Capacity:
	I0612 15:03:47.957371   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.957371   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.957371   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.957371   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.957500   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.957500   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:47.957500   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.957500   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.957500   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.957500   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.957500   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.957500   13752 command_runner.go:130] > System Info:
	I0612 15:03:47.957556   13752 command_runner.go:130] >   Machine ID:                 e65e28dfa5bf4f27a0123e4ae1007793
	I0612 15:03:47.957556   13752 command_runner.go:130] >   System UUID:                3e5a42d3-ea80-0c4d-ad18-4b76e4f3e22f
	I0612 15:03:47.957556   13752 command_runner.go:130] >   Boot ID:                    0efecf43-b070-4a8f-b542-4d1fd07306ad
	I0612 15:03:47.957601   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:47.957601   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:47.957601   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:47.957601   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:47.957601   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:47.957693   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:47.957693   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:47.957693   13752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0612 15:03:47.957735   13752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0612 15:03:47.957735   13752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0612 15:03:47.957735   13752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:47.957773   13752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0612 15:03:47.957773   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-45qqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:47.957814   13752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-vgcxw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0612 15:03:47.957814   13752 command_runner.go:130] >   kube-system                 etcd-multinode-025000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0612 15:03:47.957850   13752 command_runner.go:130] >   kube-system                 kindnet-bqlg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0612 15:03:47.957850   13752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-025000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0612 15:03:47.957891   13752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-025000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:47.957927   13752 command_runner.go:130] >   kube-system                 kube-proxy-47lr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:47.957927   13752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-025000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:47.957967   13752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:47.957967   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:47.958003   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:47.958003   13752 command_runner.go:130] >   Resource           Requests     Limits
	I0612 15:03:47.958003   13752 command_runner.go:130] >   --------           --------     ------
	I0612 15:03:47.958043   13752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0612 15:03:47.958043   13752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0612 15:03:47.958043   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0612 15:03:47.958078   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0612 15:03:47.958078   13752 command_runner.go:130] > Events:
	I0612 15:03:47.958135   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:47.958135   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:47.958135   13752 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0612 15:03:47.958171   13752 command_runner.go:130] >   Normal  Starting                 74s                kube-proxy       
	I0612 15:03:47.958171   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:47.958211   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-025000 status is now: NodeReady
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:47.958239   13752 command_runner.go:130] > Name:               multinode-025000-m02
	I0612 15:03:47.958239   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:47.958239   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m02
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_42_39_0700
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:47.958239   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:47.958239   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:42:39 +0000
	I0612 15:03:47.958239   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:47.958239   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:47.958239   13752 command_runner.go:130] > Lease:
	I0612 15:03:47.958239   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m02
	I0612 15:03:47.958239   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:47.958239   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:20 +0000
	I0612 15:03:47.958239   13752 command_runner.go:130] > Conditions:
	I0612 15:03:47.958762   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:47.958762   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:47.958820   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.958820   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.958820   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.958820   13752 command_runner.go:130] > Addresses:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   InternalIP:  172.23.196.105
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Hostname:    multinode-025000-m02
	I0612 15:03:47.958820   13752 command_runner.go:130] > Capacity:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.958820   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.958820   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.958820   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.958820   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.958820   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.958820   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.958820   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.958820   13752 command_runner.go:130] > System Info:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Machine ID:                 c11d7ff5518449f8bc8169a1fd7b0c4b
	I0612 15:03:47.958820   13752 command_runner.go:130] >   System UUID:                3b021c48-8479-f34c-83c2-77b944a77c5e
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Boot ID:                    67e77c09-c6b2-4c01-b167-2481dd4a7a96
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:47.958820   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:47.958820   13752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0612 15:03:47.958820   13752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0612 15:03:47.958820   13752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0612 15:03:47.958820   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-9bsls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:47.958820   13752 command_runner.go:130] >   kube-system                 kindnet-v4cqk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0612 15:03:47.958820   13752 command_runner.go:130] >   kube-system                 kube-proxy-tdcdp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0612 15:03:47.958820   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:47.958820   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:47.958820   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:47.958820   13752 command_runner.go:130] > Events:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0612 15:03:47.959359   13752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:47.959359   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	I0612 15:03:47.959359   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.959359   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	I0612 15:03:47.959456   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.959456   13752 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-025000-m02 status is now: NodeReady
	I0612 15:03:47.959498   13752 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:47.959526   13752 command_runner.go:130] >   Normal  NodeNotReady             23s                node-controller  Node multinode-025000-m02 status is now: NodeNotReady
	I0612 15:03:47.959526   13752 command_runner.go:130] > Name:               multinode-025000-m03
	I0612 15:03:47.959566   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:47.959566   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:47.959566   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:47.959566   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:47.959566   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m03
	I0612 15:03:47.959566   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:47.959639   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:47.959639   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:47.959682   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:47.959682   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_57_59_0700
	I0612 15:03:47.959682   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:47.959742   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:47.959742   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:47.959781   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:47.959781   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:57:58 +0000
	I0612 15:03:47.959781   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:47.959861   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:47.959861   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:47.959861   13752 command_runner.go:130] > Lease:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m03
	I0612 15:03:47.959903   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:47.959903   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:00 +0000
	I0612 15:03:47.959903   13752 command_runner.go:130] > Conditions:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:47.959903   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:47.959903   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.959903   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.959903   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.959903   13752 command_runner.go:130] > Addresses:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   InternalIP:  172.23.206.72
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Hostname:    multinode-025000-m03
	I0612 15:03:47.959903   13752 command_runner.go:130] > Capacity:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.959903   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.959903   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.959903   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.959903   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.959903   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.959903   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.959903   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.959903   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.959903   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.959903   13752 command_runner.go:130] > System Info:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Machine ID:                 b62d5e6740dc42d880d6595ac7dd57ae
	I0612 15:03:47.959903   13752 command_runner.go:130] >   System UUID:                31a13a9b-b7c6-6643-8352-fb322079216a
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Boot ID:                    a21b9eff-2471-4589-9e35-5845aae64358
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:47.959903   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:47.959903   13752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0612 15:03:47.959903   13752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0612 15:03:47.959903   13752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:47.959903   13752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0612 15:03:47.959903   13752 command_runner.go:130] >   kube-system                 kindnet-8252q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0612 15:03:47.959903   13752 command_runner.go:130] >   kube-system                 kube-proxy-7jwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0612 15:03:47.960458   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:47.960458   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:47.960458   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:47.960458   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:47.960458   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:47.960458   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:47.960458   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:47.960458   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:47.960458   13752 command_runner.go:130] > Events:
	I0612 15:03:47.960458   13752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0612 15:03:47.960609   13752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0612 15:03:47.960609   13752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0612 15:03:47.960628   13752 command_runner.go:130] >   Normal  Starting                 5m46s                  kube-proxy       
	I0612 15:03:47.960628   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.960628   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:47.960698   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.960698   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:47.960698   13752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:47.960758   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:47.960780   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  RegisteredNode           5m48s                  node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  NodeReady                5m41s                  kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  NodeNotReady             4m2s                   node-controller  Node multinode-025000-m03 status is now: NodeNotReady
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
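The describe output above shows multinode-025000 Ready while -m02 and -m03 sit at Ready=Unknown after "Kubelet stopped posting node status." A minimal client-go sketch, assuming the in-VM kubeconfig path from this log, that reduces each node to that one condition:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady treats only Ready=True as ready; Ready=Unknown (kubelet stopped
// posting status, as on -m02 and -m03 above) counts as not ready.
func nodeReady(n corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%-22s ready=%v\n", n.Name, nodeReady(n))
	}
}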
	I0612 15:03:47.971911   13752 logs.go:123] Gathering logs for coredns [26e5daf354e3] ...
	I0612 15:03:47.971911   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e5daf354e3"
	I0612 15:03:48.001521   13752 command_runner.go:130] > .:53
	I0612 15:03:48.001615   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:48.001666   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:48.001666   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:48.001666   13752 command_runner.go:130] > [INFO] 127.0.0.1:54952 - 9035 "HINFO IN 225709527310201015.7757756956422223857. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039110892s
	I0612 15:03:48.001931   13752 logs.go:123] Gathering logs for coredns [e83cf4eef49e] ...
	I0612 15:03:48.001931   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83cf4eef49e"
	I0612 15:03:48.030949   13752 command_runner.go:130] > .:53
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:48.034281   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:48.034281   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 127.0.0.1:53490 - 39118 "HINFO IN 4677201826540465335.2322207397622737457. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048277073s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:49256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267302s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:54623 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.08558s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:51804 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.048771085s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:53027 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.100151983s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:34534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001199s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:44985 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000141701s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:54544 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000543s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:55517 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000123601s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:42995 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099501s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:51839 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.135718274s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:52123 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000304602s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:36740 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274801s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:48333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003287018s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:55754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000962s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:51695 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000224102s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:49605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096301s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:37746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283001s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:54995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106501s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:49201 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077401s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:60577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:36057 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107301s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:43898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:49177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:45207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000584s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:36676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151001s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:60305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305802s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:37468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:34743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240801s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:42306 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000309601s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:36509 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152901s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:55614 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000545s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:39195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130301s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:34618 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272902s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:44444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:35691 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001307s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:41925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207401s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:44306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000736s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:46158 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000547s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
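The two coredns sections above are collected by running docker logs --tail 400 <container> inside the VM. A rough stand-in for that collection step (the containerLogs helper is an assumption, not minikube's ssh_runner API, and this runs against a local docker rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// containerLogs shells out the same way the gathering step above does:
// docker logs --tail <n> <container-id>.
func containerLogs(id string, tail int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
	return string(out), err
}

func main() {
	// Container ID taken from the log line above.
	logs, err := containerLogs("26e5daf354e3", 400)
	if err != nil {
		fmt.Println("docker logs failed:", err)
		return
	}
	fmt.Print(logs)
}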
	I0612 15:03:50.538167   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:03:50.538424   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:50.538424   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:50.538424   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:50.543025   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:50.543816   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:50.543816   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:50.543816   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:50.543816   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:50.543859   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:50.543859   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:50 GMT
	I0612 15:03:50.543859   13752 round_trippers.go:580]     Audit-Id: 20076492-16ea-4c7d-80f5-0f9ff68b238a
	I0612 15:03:50.546067   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1991"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0612 15:03:50.550031   13752 system_pods.go:59] 12 kube-system pods found
	I0612 15:03:50.550031   13752 system_pods.go:61] "coredns-7db6d8ff4d-vgcxw" [c5bd143a-d39e-46af-9308-0a97bb45729c] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "etcd-multinode-025000" [be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kindnet-8252q" [b1c2b9b3-0fd6-4393-b818-e7e823f89acc] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kindnet-bqlg8" [1f004a05-3f5f-444b-9ac0-88f0e23da904] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kindnet-v4cqk" [31faf6fc-5371-4f19-b71f-0a41b6dd2f79] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-apiserver-multinode-025000" [63e55411-d432-4e5a-becc-fae0887fecae] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-controller-manager-multinode-025000" [68c9aa4f-49ee-439c-ad51-7943e65c0085] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-proxy-47lr8" [10b24fa7-8eea-4fbb-ab18-404e853aa7ab] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-proxy-7jwdg" [643030f7-b876-4243-bacc-04205e88cc9e] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-proxy-tdcdp" [b623833c-ce55-46b1-a840-99b3143adac1] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-scheduler-multinode-025000" [83b272cb-1286-47d8-bcb1-a66056dff2a5] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "storage-provisioner" [d20f7489-1aa1-44b8-9221-4d1849884be4] Running
	I0612 15:03:50.550031   13752 system_pods.go:74] duration metric: took 3.7032455s to wait for pod list to return data ...
	I0612 15:03:50.550031   13752 default_sa.go:34] waiting for default service account to be created ...
	I0612 15:03:50.550031   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/default/serviceaccounts
	I0612 15:03:50.550615   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:50.550615   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:50.550615   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:50.553838   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:50.553838   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:50.553838   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:50.553838   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Content-Length: 262
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:50 GMT
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Audit-Id: 202f821f-e89b-4e4d-b971-1caa3bb2ae61
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:50.553838   13752 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1991"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"876e1679-16ec-44bf-9460-cce6ea3acbf0","resourceVersion":"355","creationTimestamp":"2024-06-12T21:39:45Z"}}]}
	I0612 15:03:50.554578   13752 default_sa.go:45] found service account: "default"
	I0612 15:03:50.554602   13752 default_sa.go:55] duration metric: took 4.5712ms for default service account to be created ...
	I0612 15:03:50.554602   13752 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 15:03:50.554720   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:03:50.554720   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:50.554720   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:50.554720   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:50.557111   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:50.557111   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:50.557111   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:50.557111   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:50.557111   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:50 GMT
	I0612 15:03:50.557111   13752 round_trippers.go:580]     Audit-Id: 0f43bd1a-277f-471c-b3f3-7b6b2e3218b1
	I0612 15:03:50.557111   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:50.557111   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:50.561281   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1991"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0612 15:03:50.565895   13752 system_pods.go:86] 12 kube-system pods found
	I0612 15:03:50.565895   13752 system_pods.go:89] "coredns-7db6d8ff4d-vgcxw" [c5bd143a-d39e-46af-9308-0a97bb45729c] Running
	I0612 15:03:50.565895   13752 system_pods.go:89] "etcd-multinode-025000" [be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kindnet-8252q" [b1c2b9b3-0fd6-4393-b818-e7e823f89acc] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kindnet-bqlg8" [1f004a05-3f5f-444b-9ac0-88f0e23da904] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kindnet-v4cqk" [31faf6fc-5371-4f19-b71f-0a41b6dd2f79] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kube-apiserver-multinode-025000" [63e55411-d432-4e5a-becc-fae0887fecae] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kube-controller-manager-multinode-025000" [68c9aa4f-49ee-439c-ad51-7943e65c0085] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kube-proxy-47lr8" [10b24fa7-8eea-4fbb-ab18-404e853aa7ab] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kube-proxy-7jwdg" [643030f7-b876-4243-bacc-04205e88cc9e] Running
	I0612 15:03:50.566109   13752 system_pods.go:89] "kube-proxy-tdcdp" [b623833c-ce55-46b1-a840-99b3143adac1] Running
	I0612 15:03:50.566109   13752 system_pods.go:89] "kube-scheduler-multinode-025000" [83b272cb-1286-47d8-bcb1-a66056dff2a5] Running
	I0612 15:03:50.566109   13752 system_pods.go:89] "storage-provisioner" [d20f7489-1aa1-44b8-9221-4d1849884be4] Running
	I0612 15:03:50.566145   13752 system_pods.go:126] duration metric: took 11.4229ms to wait for k8s-apps to be running ...
	I0612 15:03:50.566145   13752 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 15:03:50.586056   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 15:03:50.605851   13752 system_svc.go:56] duration metric: took 39.7055ms WaitForService to wait for kubelet
	I0612 15:03:50.605851   13752 kubeadm.go:576] duration metric: took 1m14.7841386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
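kubeadm.go waits on the component map printed above (apiserver, apps_running, default_sa, ...) by polling each check until all pass or a deadline expires. A simplified sketch of that wait shape; the check type and waitAll helper are assumptions, not minikube's real API:

package main

import (
	"errors"
	"fmt"
	"time"
)

type check struct {
	name string
	fn   func() bool
}

// waitAll polls every check on each tick and returns once all pass,
// or an error when the deadline is reached first.
func waitAll(checks []check, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pending := 0
		for _, c := range checks {
			if !c.fn() {
				pending++
			}
		}
		if pending == 0 {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for cluster components")
}

func main() {
	// Stub checks standing in for apiserver/apps_running/default_sa/etc.
	ok := func() bool { return true }
	err := waitAll([]check{{"apiserver", ok}, {"system_pods", ok}}, 6*time.Minute, 2*time.Second)
	fmt.Println("wait result:", err)
}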
	I0612 15:03:50.605851   13752 node_conditions.go:102] verifying NodePressure condition ...
	I0612 15:03:50.613058   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes
	I0612 15:03:50.613139   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:50.613139   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:50.613209   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:50.613438   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:50.613438   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:50.613438   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:50.613438   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:50 GMT
	I0612 15:03:50.613438   13752 round_trippers.go:580]     Audit-Id: f0433259-994d-465d-87b3-9f02e99a7845
	I0612 15:03:50.618051   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:50.618051   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:50.618051   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:50.618598   13752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1991"},"items":[{"metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16259 chars]
	I0612 15:03:50.619678   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:03:50.619734   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:03:50.619734   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:03:50.619812   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:03:50.619812   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:03:50.619812   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:03:50.619812   13752 node_conditions.go:105] duration metric: took 13.9615ms to run NodePressure ...
	I0612 15:03:50.619812   13752 start.go:240] waiting for startup goroutines ...
	I0612 15:03:50.619812   13752 start.go:245] waiting for cluster config update ...
	I0612 15:03:50.619886   13752 start.go:254] writing updated cluster config ...
	I0612 15:03:50.624338   13752 out.go:177] 
	I0612 15:03:50.630612   13752 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:03:50.639807   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:03:50.639807   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:03:50.648108   13752 out.go:177] * Starting "multinode-025000-m02" worker node in "multinode-025000" cluster
	I0612 15:03:50.648108   13752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 15:03:50.648108   13752 cache.go:56] Caching tarball of preloaded images
	I0612 15:03:50.648108   13752 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 15:03:50.648108   13752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 15:03:50.651280   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:03:50.653529   13752 start.go:360] acquireMachinesLock for multinode-025000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 15:03:50.653529   13752 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-025000-m02"
	I0612 15:03:50.653529   13752 start.go:96] Skipping create...Using existing machine configuration
	I0612 15:03:50.653529   13752 fix.go:54] fixHost starting: m02
	I0612 15:03:50.654779   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:03:52.788560   13752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0612 15:03:52.790100   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:03:52.790100   13752 fix.go:112] recreateIfNeeded on multinode-025000-m02: state=Stopped err=<nil>
	W0612 15:03:52.790100   13752 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 15:03:52.794283   13752 out.go:177] * Restarting existing hyperv VM for "multinode-025000-m02" ...
	I0612 15:03:52.797091   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-025000-m02
	I0612 15:03:55.756023   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:03:55.757242   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:03:55.757242   13752 main.go:141] libmachine: Waiting for host to start...
	I0612 15:03:55.757242   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:03:57.928593   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:03:57.928593   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:03:57.938810   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:00.364669   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:04:00.371455   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:01.383557   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:03.477291   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:03.477291   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:03.486274   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:05.950061   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:04:05.950061   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:06.967347   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:09.141470   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:09.141470   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:09.141627   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:11.617990   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:04:11.617990   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:12.621262   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:14.844671   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:14.844744   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:14.844810   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:17.322154   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:04:17.324672   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:18.334047   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:20.542750   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:20.542750   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:20.542750   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:23.012398   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:23.022322   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:23.025123   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:25.102306   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:25.104777   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:25.104832   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:27.713960   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:27.713960   13752 main.go:141] libmachine: [stderr =====>] : 
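
Editor's note: the "Waiting for host to start..." sequence above alternates two PowerShell probes, VM state and then the first adapter IP, sleeping about a second between rounds until a non-empty IP comes back. A minimal Go sketch of that loop, assuming powershell.exe is on PATH; the VM name is a parameter:

	// Sketch of the wait-for-host poll above: query Hyper-V for the VM state,
	// then for its first IP address, retrying until one is reported.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func psOut(cmd string) string {
		out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
		return strings.TrimSpace(string(out))
	}

	func waitForIP(vm string) string {
		for {
			if psOut(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm)) == "Running" {
				ip := psOut(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
				if ip != "" {
					return ip // e.g. 172.23.204.132 in the log above
				}
			}
			time.Sleep(time.Second)
		}
	}

	func main() { fmt.Println(waitForIP("multinode-025000-m02")) }
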
	I0612 15:04:27.726069   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:04:27.728861   13752 machine.go:94] provisionDockerMachine start ...
	I0612 15:04:27.728861   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:29.923353   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:29.923353   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:29.936170   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:32.366390   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:32.366390   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:32.383850   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:04:32.383987   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:04:32.383987   13752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 15:04:32.513468   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 15:04:32.513468   13752 buildroot.go:166] provisioning hostname "multinode-025000-m02"
	I0612 15:04:32.513468   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:34.586891   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:34.593830   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:34.593830   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:37.047855   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:37.047855   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:37.064835   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:04:37.065616   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:04:37.065616   13752 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025000-m02 && echo "multinode-025000-m02" | sudo tee /etc/hostname
	I0612 15:04:37.219666   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025000-m02
	
	I0612 15:04:37.219794   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:39.271246   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:39.279675   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:39.279675   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:41.755052   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:41.755052   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:41.770728   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:04:41.771339   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:04:41.771412   13752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 15:04:41.918296   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 15:04:41.918296   13752 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 15:04:41.918412   13752 buildroot.go:174] setting up certificates
	I0612 15:04:41.918412   13752 provision.go:84] configureAuth start
	I0612 15:04:41.918510   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:43.987317   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:43.998817   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:43.998817   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:46.536778   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:46.536778   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:46.536778   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:48.576821   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:48.576821   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:48.576821   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:51.030137   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:51.030137   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:51.030137   13752 provision.go:143] copyHostCerts
	I0612 15:04:51.032373   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 15:04:51.032827   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 15:04:51.032827   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 15:04:51.033417   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 15:04:51.034697   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 15:04:51.035010   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 15:04:51.035010   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 15:04:51.035269   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 15:04:51.035600   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 15:04:51.036532   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 15:04:51.036532   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 15:04:51.036715   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 15:04:51.037184   13752 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-025000-m02 san=[127.0.0.1 172.23.204.132 localhost minikube multinode-025000-m02]
	I0612 15:04:51.294999   13752 provision.go:177] copyRemoteCerts
	I0612 15:04:51.316836   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 15:04:51.316836   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:53.379898   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:53.391252   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:53.391252   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:55.855747   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:55.855747   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:55.867123   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:04:55.964974   13752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6481227s)
	I0612 15:04:55.965098   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 15:04:55.965581   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 15:04:56.009622   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 15:04:56.010097   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0612 15:04:56.053102   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 15:04:56.055574   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 15:04:56.102614   13752 provision.go:87] duration metric: took 14.1841548s to configureAuth
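
Editor's note: configureAuth above regenerates the machine's server certificate, signed by the minikube CA, with SANs covering loopback, the VM IP, and the hostnames shown in the san=[...] log line. A minimal crypto/x509 sketch of that step, assuming the CA key is PKCS#1 RSA PEM and using shortened file paths:

	// Sketch of the server-cert generation logged above (provision.go:117).
	// Paths are shortened; the real code uses the profile's cert directories.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caKeyBlock, _ := pem.Decode(caKeyPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes) // assumes PKCS#1 RSA CA key
		if err != nil {
			panic(err)
		}
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-025000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirror the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.23.204.132")},
			DNSNames:    []string{"localhost", "minikube", "multinode-025000-m02"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
	}
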
	I0612 15:04:56.102734   13752 buildroot.go:189] setting minikube options for container-runtime
	I0612 15:04:56.103379   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:04:56.103443   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:58.138283   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:58.149275   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:58.149364   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:00.573468   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:00.573468   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:00.589848   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:00.590045   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:00.590045   13752 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 15:05:00.717121   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 15:05:00.717121   13752 buildroot.go:70] root file system type: tmpfs
	I0612 15:05:00.717412   13752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 15:05:00.717412   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:02.743742   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:02.755872   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:02.755872   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:05.160289   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:05.160289   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:05.175954   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:05.176972   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:05.177060   13752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.200.184"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 15:05:05.334048   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.200.184
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 15:05:05.334201   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:07.403075   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:07.403075   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:07.413954   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:09.886807   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:09.886807   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:09.904696   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:09.905232   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:09.905232   13752 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 15:05:12.185858   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0612 15:05:12.185858   13752 machine.go:97] duration metric: took 44.4568496s to provisionDockerMachine
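
Editor's note: the `diff -u ... || { mv ...; systemctl ...; }` one-liner above makes the unit update idempotent: the rendered docker.service only replaces the installed one (and docker only reloads/restarts) when the content actually differs, which is why a fresh guest logs "can't stat" and then installs the unit. A local-filesystem Go sketch of the same compare-then-swap, with the unit text elided to a placeholder:

	// Local sketch of the compare-then-swap done over SSH above: only install
	// the newly rendered unit and reload/restart when the content changed.
	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func main() {
		const path = "/lib/systemd/system/docker.service"
		rendered := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // full unit text elided

		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, rendered) {
			return // unchanged: leave docker running, no reload or restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			panic(err)
		}
		if err := os.Rename(path+".new", path); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"-f", "daemon-reload"},
			{"-f", "enable", "docker"},
			{"-f", "restart", "docker"},
		} {
			_ = exec.Command("systemctl", args...).Run()
		}
	}
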
	I0612 15:05:12.185858   13752 start.go:293] postStartSetup for "multinode-025000-m02" (driver="hyperv")
	I0612 15:05:12.185858   13752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 15:05:12.196892   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 15:05:12.196892   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:14.297058   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:14.297058   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:14.309152   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:16.758401   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:16.769950   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:16.770039   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:05:16.883549   13752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6866412s)
	I0612 15:05:16.894999   13752 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 15:05:16.902602   13752 command_runner.go:130] > NAME=Buildroot
	I0612 15:05:16.902602   13752 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 15:05:16.902602   13752 command_runner.go:130] > ID=buildroot
	I0612 15:05:16.902602   13752 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 15:05:16.902602   13752 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 15:05:16.902602   13752 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 15:05:16.902602   13752 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 15:05:16.903363   13752 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 15:05:16.904464   13752 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 15:05:16.904464   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 15:05:16.915381   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 15:05:16.936106   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 15:05:16.979307   13752 start.go:296] duration metric: took 4.7934332s for postStartSetup
	I0612 15:05:16.979333   13752 fix.go:56] duration metric: took 1m26.3255193s for fixHost
	I0612 15:05:16.979333   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:19.039892   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:19.050914   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:19.050914   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:21.510031   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:21.521336   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:21.528212   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:21.528778   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:21.528860   13752 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 15:05:21.659683   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718229921.655053520
	
	I0612 15:05:21.659683   13752 fix.go:216] guest clock: 1718229921.655053520
	I0612 15:05:21.659683   13752 fix.go:229] Guest: 2024-06-12 15:05:21.65505352 -0700 PDT Remote: 2024-06-12 15:05:16.9793333 -0700 PDT m=+294.041716601 (delta=4.67572022s)
	I0612 15:05:21.659854   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:23.744338   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:23.757408   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:23.757408   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:26.193091   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:26.193091   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:26.210278   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:26.210766   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:26.210766   13752 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718229921
	I0612 15:05:26.356668   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 22:05:21 UTC 2024
	
	I0612 15:05:26.356668   13752 fix.go:236] clock set: Wed Jun 12 22:05:21 UTC 2024
	 (err=<nil>)
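
Editor's note: the clock fix above reads the guest's `date +%s.%N`, compares it against the host wall clock (delta=4.67572022s here), and resets the guest clock with `sudo date -s @<epoch>`. A minimal Go sketch of the delta computation, assuming the SSH output has already been captured into a string:

	// Sketch of the guest-clock fix logged above (fix.go:216/229/236): parse
	// `date +%s.%N` output and compute the host/guest drift; a real fix would
	// then run `sudo date -s @<epoch>` over SSH.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		out := "1718229921.655053520" // guest `date +%s.%N`, from the log above
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		secs, _ := strconv.ParseInt(parts[0], 10, 64)
		var nanos int64
		if len(parts) == 2 {
			nanos, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		guest := time.Unix(secs, nanos)
		delta := time.Since(guest)
		fmt.Printf("guest=%s delta=%s\n", guest, delta)
		if delta < -2*time.Second || delta > 2*time.Second {
			fmt.Printf("would run: sudo date -s @%d\n", secs)
		}
	}
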
	I0612 15:05:26.356668   13752 start.go:83] releasing machines lock for "multinode-025000-m02", held for 1m35.7028233s
	I0612 15:05:26.356668   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:28.463909   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:28.463909   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:28.475301   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:31.107427   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:31.107502   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:31.110144   13752 out.go:177] * Found network options:
	I0612 15:05:31.113248   13752 out.go:177]   - NO_PROXY=172.23.200.184
	W0612 15:05:31.115585   13752 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 15:05:31.117982   13752 out.go:177]   - NO_PROXY=172.23.200.184
	W0612 15:05:31.120848   13752 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 15:05:31.123156   13752 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 15:05:31.126385   13752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 15:05:31.126385   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:31.137186   13752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 15:05:31.137186   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:33.410239   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:36.072617   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:36.085997   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:36.086183   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:05:36.110051   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:36.110108   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:36.110108   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:05:36.239650   13752 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 15:05:36.239650   13752 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.113248s)
	I0612 15:05:36.239650   13752 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0612 15:05:36.239650   13752 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1024472s)
	W0612 15:05:36.239650   13752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 15:05:36.250503   13752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 15:05:36.294696   13752 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0612 15:05:36.294696   13752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
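
Editor's note: the `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` step above side-lines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so the kubelet will not pick them up. A local Go sketch of the same sweep, assuming the directory is writable by the caller:

	// Local sketch of the CNI-disable step above: rename bridge/podman
	// configs in /etc/cni/net.d to *.mk_disabled.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		files, _ := filepath.Glob("/etc/cni/net.d/*")
		for _, f := range files {
			base := filepath.Base(f)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue // already disabled on a previous pass
			}
			if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
				fmt.Printf("%s, ", f) // matches the `-printf "%p, "` output above
				_ = os.Rename(f, f+".mk_disabled")
			}
		}
	}
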
	I0612 15:05:36.294696   13752 start.go:494] detecting cgroup driver to use...
	I0612 15:05:36.294696   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 15:05:36.331076   13752 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0612 15:05:36.343247   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 15:05:36.379586   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 15:05:36.403323   13752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 15:05:36.414297   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 15:05:36.447987   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 15:05:36.481550   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 15:05:36.512594   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 15:05:36.550090   13752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 15:05:36.586911   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 15:05:36.617435   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 15:05:36.649684   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 15:05:36.686014   13752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 15:05:36.706594   13752 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 15:05:36.718254   13752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 15:05:36.747992   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:36.940371   13752 ssh_runner.go:195] Run: sudo systemctl restart containerd
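
Editor's note: the run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: cgroupfs as the cgroup driver (SystemdCgroup = false), the runc v2 shim, pause:3.9 as the sandbox image, and so on, followed by daemon-reload and a containerd restart. A Go sketch of one of those edits using the standard regexp package, assuming the config file exists and is writable:

	// Sketch of the `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
	// edit above, done with Go's regexp instead of sed.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}
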
	I0612 15:05:36.974252   13752 start.go:494] detecting cgroup driver to use...
	I0612 15:05:36.986707   13752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 15:05:37.012971   13752 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0612 15:05:37.012971   13752 command_runner.go:130] > [Unit]
	I0612 15:05:37.013084   13752 command_runner.go:130] > Description=Docker Application Container Engine
	I0612 15:05:37.013084   13752 command_runner.go:130] > Documentation=https://docs.docker.com
	I0612 15:05:37.013152   13752 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0612 15:05:37.013152   13752 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0612 15:05:37.013220   13752 command_runner.go:130] > StartLimitBurst=3
	I0612 15:05:37.013274   13752 command_runner.go:130] > StartLimitIntervalSec=60
	I0612 15:05:37.013274   13752 command_runner.go:130] > [Service]
	I0612 15:05:37.013274   13752 command_runner.go:130] > Type=notify
	I0612 15:05:37.013314   13752 command_runner.go:130] > Restart=on-failure
	I0612 15:05:37.013350   13752 command_runner.go:130] > Environment=NO_PROXY=172.23.200.184
	I0612 15:05:37.013350   13752 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0612 15:05:37.013388   13752 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0612 15:05:37.013424   13752 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0612 15:05:37.013478   13752 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0612 15:05:37.013478   13752 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0612 15:05:37.013512   13752 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0612 15:05:37.013549   13752 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0612 15:05:37.013549   13752 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0612 15:05:37.013549   13752 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0612 15:05:37.013549   13752 command_runner.go:130] > ExecStart=
	I0612 15:05:37.013549   13752 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0612 15:05:37.013549   13752 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0612 15:05:37.013695   13752 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0612 15:05:37.013695   13752 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0612 15:05:37.013731   13752 command_runner.go:130] > LimitNOFILE=infinity
	I0612 15:05:37.013731   13752 command_runner.go:130] > LimitNPROC=infinity
	I0612 15:05:37.013731   13752 command_runner.go:130] > LimitCORE=infinity
	I0612 15:05:37.013731   13752 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0612 15:05:37.013731   13752 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0612 15:05:37.013731   13752 command_runner.go:130] > TasksMax=infinity
	I0612 15:05:37.013731   13752 command_runner.go:130] > TimeoutStartSec=0
	I0612 15:05:37.013731   13752 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0612 15:05:37.013731   13752 command_runner.go:130] > Delegate=yes
	I0612 15:05:37.013731   13752 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0612 15:05:37.013731   13752 command_runner.go:130] > KillMode=process
	I0612 15:05:37.013731   13752 command_runner.go:130] > [Install]
	I0612 15:05:37.013731   13752 command_runner.go:130] > WantedBy=multi-user.target
	I0612 15:05:37.025273   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 15:05:37.059852   13752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 15:05:37.100371   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 15:05:37.138018   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 15:05:37.175943   13752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 15:05:37.243461   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 15:05:37.268953   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 15:05:37.302431   13752 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0612 15:05:37.316754   13752 ssh_runner.go:195] Run: which cri-dockerd
	I0612 15:05:37.320876   13752 command_runner.go:130] > /usr/bin/cri-dockerd
	I0612 15:05:37.338790   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 15:05:37.358517   13752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 15:05:37.403563   13752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 15:05:37.590876   13752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 15:05:37.776758   13752 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 15:05:37.777034   13752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 15:05:37.823278   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:38.031246   13752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 15:05:40.647681   13752 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6163867s)
	I0612 15:05:40.659388   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 15:05:40.700003   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 15:05:40.738723   13752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 15:05:40.945882   13752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 15:05:41.136425   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:41.327495   13752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 15:05:41.372148   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 15:05:41.414469   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:41.603576   13752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 15:05:41.712870   13752 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 15:05:41.726379   13752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 15:05:41.729851   13752 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0612 15:05:41.729851   13752 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 15:05:41.729851   13752 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0612 15:05:41.729851   13752 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0612 15:05:41.729851   13752 command_runner.go:130] > Access: 2024-06-12 22:05:41.624113268 +0000
	I0612 15:05:41.729851   13752 command_runner.go:130] > Modify: 2024-06-12 22:05:41.624113268 +0000
	I0612 15:05:41.729851   13752 command_runner.go:130] > Change: 2024-06-12 22:05:41.629113300 +0000
	I0612 15:05:41.729851   13752 command_runner.go:130] >  Birth: -
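
Editor's note: "Will wait 60s for socket path" above is a stat poll on /var/run/cri-dockerd.sock until the socket appears (the stat output just logged confirms it). A local Go sketch of the same wait; in the real flow the stat runs over SSH on the guest:

	// Local sketch of the socket wait logged above (start.go:541): poll stat
	// until the cri-dockerd socket exists or the deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/cri-dockerd.sock"
		deadline := time.Now().Add(60 * time.Second)
		for {
			if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
				fmt.Println("socket ready:", sock)
				return
			}
			if time.Now().After(deadline) {
				panic("timed out waiting for " + sock)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
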
	I0612 15:05:41.729851   13752 start.go:562] Will wait 60s for crictl version
	I0612 15:05:41.755436   13752 ssh_runner.go:195] Run: which crictl
	I0612 15:05:41.761991   13752 command_runner.go:130] > /usr/bin/crictl
	I0612 15:05:41.774612   13752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 15:05:41.833501   13752 command_runner.go:130] > Version:  0.1.0
	I0612 15:05:41.833501   13752 command_runner.go:130] > RuntimeName:  docker
	I0612 15:05:41.833501   13752 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0612 15:05:41.833501   13752 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 15:05:41.833501   13752 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 15:05:41.847894   13752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 15:05:41.873725   13752 command_runner.go:130] > 26.1.4
	I0612 15:05:41.896589   13752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 15:05:41.932241   13752 command_runner.go:130] > 26.1.4
	I0612 15:05:41.937251   13752 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 15:05:41.939322   13752 out.go:177]   - env NO_PROXY=172.23.200.184
	I0612 15:05:41.942089   13752 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 15:05:41.946227   13752 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 15:05:41.946227   13752 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 15:05:41.946227   13752 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 15:05:41.946227   13752 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 15:05:41.949205   13752 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 15:05:41.949205   13752 ip.go:210] interface addr: 172.23.192.1/20
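
Editor's note: getIPForInterface above walks the host's adapters, skips those whose name does not start with "vEthernet (Default Switch)", and takes the matching adapter's IPv4 address (172.23.192.1, the Hyper-V gateway that becomes host.minikube.internal below). A stdlib Go sketch of that scan:

	// Sketch of the interface scan logged above (ip.go:172-210): find the
	// adapter named "vEthernet (Default Switch)" and print its IPv4 address.
	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func main() {
		ifaces, err := net.Interfaces()
		if err != nil {
			panic(err)
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
				continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" above
			}
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					fmt.Println(ipnet.IP) // 172.23.192.1 in the log above
				}
			}
		}
	}
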
	I0612 15:05:41.962502   13752 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 15:05:41.969716   13752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 15:05:41.992291   13752 mustload.go:65] Loading cluster: multinode-025000
	I0612 15:05:41.993104   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:05:41.993342   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:05:44.195800   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:44.207946   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:44.207946   13752 host.go:66] Checking if "multinode-025000" exists ...
	I0612 15:05:44.209193   13752 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000 for IP: 172.23.204.132
	I0612 15:05:44.209193   13752 certs.go:194] generating shared ca certs ...
	I0612 15:05:44.209331   13752 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:05:44.209694   13752 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 15:05:44.210508   13752 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 15:05:44.210628   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 15:05:44.210628   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 15:05:44.210628   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 15:05:44.211162   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 15:05:44.211756   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 15:05:44.212064   13752 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 15:05:44.212141   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 15:05:44.212371   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 15:05:44.212601   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 15:05:44.212838   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 15:05:44.213654   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 15:05:44.213875   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.214079   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.214278   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.214525   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 15:05:44.265097   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 15:05:44.315141   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 15:05:44.361754   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 15:05:44.411644   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 15:05:44.459910   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 15:05:44.506569   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 15:05:44.564759   13752 ssh_runner.go:195] Run: openssl version
	I0612 15:05:44.573797   13752 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 15:05:44.585415   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 15:05:44.620988   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.627331   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.628837   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.644759   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.647443   13752 command_runner.go:130] > 51391683
	I0612 15:05:44.667423   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 15:05:44.704038   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 15:05:44.739020   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.746867   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.746867   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.757883   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.769373   13752 command_runner.go:130] > 3ec20f2e
	I0612 15:05:44.782071   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 15:05:44.814865   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 15:05:44.847078   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.855375   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.855521   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.865355   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.875520   13752 command_runner.go:130] > b5213941
	I0612 15:05:44.887608   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
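For reference, the symlink sequence above is how OpenSSL locates trusted CAs at run time: each certificate is hashed with "openssl x509 -hash -noout" and exposed as /etc/ssl/certs/<hash>.0. A minimal standalone Go sketch of that step (illustrative only, not minikube's certs.go; assumes openssl on PATH and write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash mirrors the logged steps: compute the OpenSSL subject hash of
    // a CA certificate, then create /etc/ssl/certs/<hash>.0 pointing at it
    // unless a link already exists (the "test -L ... || ln -fs ..." guard).
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // symlink (or file) already present; nothing to do
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        for _, c := range []string{
            "/usr/share/ca-certificates/1280.pem",
            "/usr/share/ca-certificates/12802.pem",
            "/usr/share/ca-certificates/minikubeCA.pem",
        } {
            if err := linkByHash(c); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }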
	I0612 15:05:44.920861   13752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 15:05:44.929372   13752 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 15:05:44.929447   13752 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
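The stat failure above is the expected first-start signal: a missing apiserver-kubelet-client.crt means node certificates still have to be generated. A trivial sketch of the same existence check (hypothetical helper, not the certs.go implementation):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        const cert = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
        if _, err := os.Stat(cert); errors.Is(err, fs.ErrNotExist) {
            fmt.Println("'apiserver-kubelet-client' cert doesn't exist, likely first start")
        }
    }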
	I0612 15:05:44.929694   13752 kubeadm.go:928] updating node {m02 172.23.204.132 8443 v1.30.1 docker false true} ...
	I0612 15:05:44.929938   13752 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.204.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 15:05:44.943003   13752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 15:05:44.963614   13752 command_runner.go:130] > kubeadm
	I0612 15:05:44.963614   13752 command_runner.go:130] > kubectl
	I0612 15:05:44.963614   13752 command_runner.go:130] > kubelet
	I0612 15:05:44.963805   13752 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 15:05:44.974929   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0612 15:05:44.998453   13752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0612 15:05:45.031017   13752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 15:05:45.079311   13752 ssh_runner.go:195] Run: grep 172.23.200.184	control-plane.minikube.internal$ /etc/hosts
	I0612 15:05:45.089998   13752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.200.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
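The bash pipeline above drops any stale control-plane.minikube.internal mapping, re-appends the current control-plane IP, and stages the result in a temp file before copying it over /etc/hosts. The same logic as a Go sketch (assumes it runs as root on the guest; minikube actually executes the shell version over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const suffix = "\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Keep every line that does not already map the control-plane name,
        // then append the fresh entry (mirrors the grep -v / echo pair above).
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, suffix) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "172.23.200.184"+suffix)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }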
	I0612 15:05:45.124571   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:45.325168   13752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 15:05:45.352400   13752 host.go:66] Checking if "multinode-025000" exists ...
	I0612 15:05:45.353142   13752 start.go:316] joinCluster: &{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.200.184 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 15:05:45.353674   13752 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 15:05:45.353674   13752 host.go:66] Checking if "multinode-025000-m02" exists ...
	I0612 15:05:45.354009   13752 mustload.go:65] Loading cluster: multinode-025000
	I0612 15:05:45.354772   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:05:45.355604   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:05:47.576539   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:47.581332   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:47.581537   13752 host.go:66] Checking if "multinode-025000" exists ...
	I0612 15:05:47.582151   13752 api_server.go:166] Checking apiserver status ...
	I0612 15:05:47.594257   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:05:47.594257   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:05:49.805232   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:49.805232   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:49.817884   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:52.427895   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:05:52.427990   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:52.428183   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:05:52.544068   13752 command_runner.go:130] > 1830
	I0612 15:05:52.544339   13752 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.9500288s)
	I0612 15:05:52.557130   13752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup
	W0612 15:05:52.577706   13752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 15:05:52.590719   13752 ssh_runner.go:195] Run: ls
	I0612 15:05:52.600003   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:05:52.608430   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 200:
	ok
	I0612 15:05:52.622028   13752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-025000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0612 15:05:52.787849   13752 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-v4cqk, kube-system/kube-proxy-tdcdp
	I0612 15:05:55.828219   13752 command_runner.go:130] > node/multinode-025000-m02 cordoned
	I0612 15:05:55.828300   13752 command_runner.go:130] > pod "busybox-fc5497c4f-9bsls" has DeletionTimestamp older than 1 seconds, skipping
	I0612 15:05:55.828300   13752 command_runner.go:130] > node/multinode-025000-m02 drained
	I0612 15:05:55.828300   13752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-025000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2062611s)
	I0612 15:05:55.828300   13752 node.go:128] successfully drained node "multinode-025000-m02"
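Before resetting the worker, minikube drains it with the bundled kubectl, using exactly the flags shown above. An equivalent local invocation as a Go sketch (assumes kubectl on PATH and KUBECONFIG pointing at the cluster; minikube runs this over SSH with its bundled binary instead):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Same flags as the logged drain command, run locally instead of over SSH.
        cmd := exec.Command("kubectl", "drain", "multinode-025000-m02",
            "--force", "--grace-period=1", "--skip-wait-for-delete-timeout=1",
            "--disable-eviction", "--ignore-daemonsets", "--delete-emptydir-data")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "drain failed:", err)
            os.Exit(1)
        }
    }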
	I0612 15:05:55.828508   13752 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0612 15:05:55.828770   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:58.012780   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:58.024308   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:58.024468   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:06:00.552515   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:06:00.552515   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:00.552515   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:06:01.052634   13752 command_runner.go:130] ! W0612 22:06:01.049967    1543 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0612 15:06:01.574423   13752 command_runner.go:130] ! W0612 22:06:01.569930    1543 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 8dc88eb906f301af25ee91c757ea86831a611d0c2cbd9c6fc85b258149fa4c16: output: E0612 22:06:01.264106    1582 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-9bsls_default\" network: cni config uninitialized" podSandboxID="8dc88eb906f301af25ee91c757ea86831a611d0c2cbd9c6fc85b258149fa4c16"
	I0612 15:06:01.574508   13752 command_runner.go:130] ! time="2024-06-12T22:06:01Z" level=fatal msg="stopping the pod sandbox \"8dc88eb906f301af25ee91c757ea86831a611d0c2cbd9c6fc85b258149fa4c16\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-9bsls_default\" network: cni config uninitialized"
	I0612 15:06:01.574508   13752 command_runner.go:130] ! : exit status 1
	I0612 15:06:01.603350   13752 command_runner.go:130] > [preflight] Running pre-flight checks
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Stopping the kubelet service
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0612 15:06:01.603350   13752 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0612 15:06:01.603350   13752 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0612 15:06:01.603350   13752 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0612 15:06:01.603350   13752 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0612 15:06:01.603350   13752 command_runner.go:130] > to reset your system's IPVS tables.
	I0612 15:06:01.603350   13752 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0612 15:06:01.603350   13752 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0612 15:06:01.603350   13752 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.7748225s)
	I0612 15:06:01.603350   13752 node.go:155] successfully reset node "multinode-025000-m02"
	I0612 15:06:01.604758   13752 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:06:01.605339   13752 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.200.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 15:06:01.606692   13752 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 15:06:01.606878   13752 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0612 15:06:01.606878   13752 round_trippers.go:463] DELETE https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:01.606878   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:01.606878   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:01.606878   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:01.606878   13752 round_trippers.go:473]     Content-Type: application/json
	I0612 15:06:01.627326   13752 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0612 15:06:01.627326   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:01.627326   13752 round_trippers.go:580]     Content-Length: 171
	I0612 15:06:01.627326   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:01 GMT
	I0612 15:06:01.627326   13752 round_trippers.go:580]     Audit-Id: 01208d5e-ac15-40e6-b821-ffabd585b7a7
	I0612 15:06:01.633211   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:01.633211   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:01.633211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:01.633211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:01.633211   13752 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-025000-m02","kind":"nodes","uid":"795a4638-bf70-440d-a6a1-2f194ade7384"}}
	I0612 15:06:01.633211   13752 node.go:180] successfully deleted node "multinode-025000-m02"
	I0612 15:06:01.633211   13752 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
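The raw DELETE /api/v1/nodes/... exchange traced above maps to a single typed client-go call. A standalone sketch, assuming KUBECONFIG points at this cluster (client-go is what minikube itself drives here, but this program is illustrative only):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Mirrors the logged DELETE https://<apiserver>/api/v1/nodes/multinode-025000-m02.
        if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-025000-m02", metav1.DeleteOptions{}); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(`node "multinode-025000-m02" deleted`)
    }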
	I0612 15:06:01.633211   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 15:06:01.633211   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:06:03.689503   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:06:03.689503   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:03.700060   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:06:06.211691   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:06:06.222483   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:06.222858   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:06:06.407307   13752 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7r4gb6.bv8tbrmt47yqfsdc --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 
	I0612 15:06:06.407307   13752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7740803s)
	I0612 15:06:06.407307   13752 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 15:06:06.407307   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7r4gb6.bv8tbrmt47yqfsdc --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-025000-m02"
	I0612 15:06:06.620598   13752 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 15:06:07.480579   13752 command_runner.go:130] > [preflight] Running pre-flight checks
	I0612 15:06:07.480579   13752 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0612 15:06:07.480579   13752 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 503.119418ms
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0612 15:06:07.480579   13752 command_runner.go:130] > This node has joined the cluster:
	I0612 15:06:07.480579   13752 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0612 15:06:07.480579   13752 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0612 15:06:07.480579   13752 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0612 15:06:07.480579   13752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7r4gb6.bv8tbrmt47yqfsdc --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-025000-m02": (1.0732687s)
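Rejoining is a two-step handshake: "kubeadm token create --print-join-command --ttl=0" on the control plane emits the exact join line (bootstrap token plus discovery CA cert hash), and that line, with --ignore-preflight-errors=all, the cri-dockerd socket, and a --node-name appended, is executed on the worker. A sketch of the first half, assuming kubeadm on the control-plane host's PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Emits e.g.: kubeadm join control-plane.minikube.internal:8443 --token ...
        //             --discovery-token-ca-cert-hash sha256:...
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // minikube appends its extra flags to this string before running it on the worker.
        fmt.Println(strings.TrimSpace(string(out)))
    }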
	I0612 15:06:07.480579   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 15:06:07.686699   13752 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0612 15:06:07.886287   13752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-025000-m02 minikube.k8s.io/updated_at=2024_06_12T15_06_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=multinode-025000 minikube.k8s.io/primary=false
	I0612 15:06:08.003256   13752 command_runner.go:130] > node/multinode-025000-m02 labeled
	I0612 15:06:08.003256   13752 start.go:318] duration metric: took 22.6500388s to joinCluster
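The kubectl label --overwrite step above stamps the rejoined node with minikube.k8s.io/* metadata; the equivalent typed operation is a merge patch on the node. A client-go sketch (assumes KUBECONFIG; for brevity only two of the logged label values are set):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Merge-patch the node's labels, mirroring kubectl label --overwrite.
        patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false"}}}`)
        if _, err := cs.CoreV1().Nodes().Patch(context.Background(), "multinode-025000-m02",
            types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node/multinode-025000-m02 labeled")
    }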
	I0612 15:06:08.003256   13752 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 15:06:08.008504   13752 out.go:177] * Verifying Kubernetes components...
	I0612 15:06:08.004258   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:06:08.025586   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:06:08.209009   13752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 15:06:08.237210   13752 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:06:08.238198   13752 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.200.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 15:06:08.238963   13752 node_ready.go:35] waiting up to 6m0s for node "multinode-025000-m02" to be "Ready" ...
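The GET traces that follow are a roughly 500ms poll of the node object until its NodeReady condition turns True, bounded by the 6m0s budget noted above. A client-go sketch of the same wait loop (illustrative, not minikube's node_ready.go; assumes KUBECONFIG):

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Re-fetch the node every 500ms until NodeReady is True, or give up after 6 minutes.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-025000-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            fmt.Fprintln(os.Stderr, "node never became Ready:", err)
            os.Exit(1)
        }
        fmt.Println(`node "multinode-025000-m02" is Ready`)
    }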
	I0612 15:06:08.239211   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:08.239211   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:08.239211   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:08.239211   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:08.242936   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:06:08.242936   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:08.242936   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:08.242936   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:08.243016   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:08.243016   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:08 GMT
	I0612 15:06:08.243016   13752 round_trippers.go:580]     Audit-Id: bbd3425f-3783-42d8-b83f-5a530af99375
	I0612 15:06:08.243016   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:08.243129   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2138","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3789 chars]
	I0612 15:06:08.744007   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:08.744109   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:08.744109   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:08.744109   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:08.747683   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:08.747683   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:08.747763   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:08.747763   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:08 GMT
	I0612 15:06:08.747763   13752 round_trippers.go:580]     Audit-Id: 13655efc-c1c1-4ce3-9eac-036dc7d24263
	I0612 15:06:08.747763   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:08.747763   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:08.747763   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:08.747763   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2138","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3789 chars]
	I0612 15:06:09.243796   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:09.243796   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:09.243796   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:09.243796   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:09.246762   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:09.246762   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:09.246762   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:09 GMT
	I0612 15:06:09.246762   13752 round_trippers.go:580]     Audit-Id: e8569864-a7fd-4431-970b-65ecf62cc822
	I0612 15:06:09.246762   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:09.246762   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:09.246762   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:09.246762   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:09.249446   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2138","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3789 chars]
	I0612 15:06:09.749026   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:09.749120   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:09.749120   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:09.749207   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:09.752573   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:09.752573   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:09.752573   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:09.752573   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:09.752573   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:09 GMT
	I0612 15:06:09.752573   13752 round_trippers.go:580]     Audit-Id: 1e9c4978-e2ec-46e7-92d3-89c5fd10acef
	I0612 15:06:09.752573   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:09.752573   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:09.753557   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:10.253638   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:10.253638   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:10.253638   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:10.253638   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:10.258536   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:10.258536   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:10.258536   13752 round_trippers.go:580]     Audit-Id: 7ea82fb2-f4eb-4425-a221-6c96965459d5
	I0612 15:06:10.258536   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:10.258536   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:10.258536   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:10.258536   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:10.258536   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:10 GMT
	I0612 15:06:10.259070   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:10.259618   13752 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 15:06:10.742905   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:10.743016   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:10.743016   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:10.743081   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:10.751393   13752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 15:06:10.751393   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:10.751393   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:10.751393   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:10.751393   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:10.751393   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:10.751393   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:10 GMT
	I0612 15:06:10.751393   13752 round_trippers.go:580]     Audit-Id: fb12fb92-82c7-4844-9bec-41394cdc0850
	I0612 15:06:10.751393   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:11.250847   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:11.250847   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:11.250847   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:11.250932   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:11.254567   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:11.254567   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:11.254884   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:11 GMT
	I0612 15:06:11.254884   13752 round_trippers.go:580]     Audit-Id: d8b9dfa5-ef19-451c-88a4-eaa087b7c3df
	I0612 15:06:11.254884   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:11.254884   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:11.254884   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:11.254884   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:11.255084   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:11.752245   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:11.752329   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:11.752329   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:11.752329   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:11.756218   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:11.756218   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:11.756218   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:11 GMT
	I0612 15:06:11.756218   13752 round_trippers.go:580]     Audit-Id: a8db07f8-e087-4ee3-8b2e-f35286b68800
	I0612 15:06:11.756218   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:11.756218   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:11.756218   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:11.756218   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:11.756218   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:12.252769   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:12.252769   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:12.252769   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:12.252769   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:12.260792   13752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 15:06:12.261263   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:12.261263   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:12.261263   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:12.261263   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:12 GMT
	I0612 15:06:12.261263   13752 round_trippers.go:580]     Audit-Id: 651b1b22-a56d-4674-a288-4181fe50dfe9
	I0612 15:06:12.261263   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:12.261263   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:12.261263   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:12.261944   13752 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 15:06:12.739972   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:12.740161   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:12.740161   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:12.740161   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:12.743507   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:12.743507   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:12.743507   13752 round_trippers.go:580]     Audit-Id: ddb2b769-7566-4938-a6ba-3292e436dfef
	I0612 15:06:12.744305   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:12.744305   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:12.744305   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:12.744305   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:12.744305   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:12 GMT
	I0612 15:06:12.744735   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:13.241569   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:13.241569   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:13.241569   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:13.241569   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:13.246203   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:13.246203   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:13.246203   13752 round_trippers.go:580]     Audit-Id: 281d352c-d865-4aae-b4f6-0b27e69d52f9
	I0612 15:06:13.246289   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:13.246289   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:13.246289   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:13.246289   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:13.246289   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:13 GMT
	I0612 15:06:13.246549   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:13.740410   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:13.740500   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:13.740500   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:13.740500   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:13.744955   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:13.744991   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:13.744991   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:13.744991   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:13 GMT
	I0612 15:06:13.745096   13752 round_trippers.go:580]     Audit-Id: 17e3d26b-b10e-4205-bbb5-412836aeb7b4
	I0612 15:06:13.745096   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:13.745096   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:13.745096   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:13.745505   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:14.240601   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:14.240682   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:14.240682   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:14.240682   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:14.244546   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:14.244546   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:14.244546   13752 round_trippers.go:580]     Audit-Id: f09abef8-5fd6-4eb5-98d9-7f2d3987642a
	I0612 15:06:14.244546   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:14.244891   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:14.244891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:14.244891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:14.244891   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:14 GMT
	I0612 15:06:14.245072   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:14.739629   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:14.739751   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:14.739751   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:14.739829   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:14.749271   13752 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 15:06:14.749271   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:14.749271   13752 round_trippers.go:580]     Audit-Id: 8ce7d0db-ea76-4501-a939-8fe4f2a9ae78
	I0612 15:06:14.749271   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:14.749271   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:14.749271   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:14.749271   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:14.749271   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:14 GMT
	I0612 15:06:14.749271   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:14.750327   13752 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
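The `node_ready.go:53` lines bracket a simple poll: minikube re-GETs the node object roughly every 500 ms (compare the timestamps above) and keeps looping while the NodeReady condition is anything but "True". A minimal client-go sketch of the same loop — `waitNodeReady` and the kubeconfig handling are illustrative, not minikube's actual helper:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node on a fixed cadence until its NodeReady
// condition is True, mirroring the ~500 ms poll visible in the log above.
// (Illustrative helper, not minikube's actual implementation.)
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "multinode-025000-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```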
	I0612 15:06:15.250657   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:15.250721   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:15.250721   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:15.250721   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:15.260447   13752 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 15:06:15.260578   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:15.260578   13752 round_trippers.go:580]     Audit-Id: c88c2121-1272-4cb3-acc3-1244083e9b7f
	I0612 15:06:15.260578   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:15.260651   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:15.260651   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:15.260651   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:15.260651   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:15 GMT
	I0612 15:06:15.260817   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:15.752170   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:15.752170   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:15.752170   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:15.752170   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:15.757282   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:06:15.757282   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:15.757282   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:15.757282   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:15.757282   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:15.757282   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:15.757282   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:15 GMT
	I0612 15:06:15.757282   13752 round_trippers.go:580]     Audit-Id: 5b05b8d7-25a2-4d8b-9fae-67f3de76fad9
	I0612 15:06:15.757282   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:16.253366   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:16.253366   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:16.253430   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:16.253430   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:16.257258   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:16.257698   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:16.257698   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:16.257698   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:16.257698   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:16.257698   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:16 GMT
	I0612 15:06:16.257698   13752 round_trippers.go:580]     Audit-Id: d50e984d-adb2-4e9e-a2f4-01492d664abb
	I0612 15:06:16.257698   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:16.259469   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:16.752694   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:16.752694   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:16.752958   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:16.752958   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:16.760419   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:06:16.760593   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:16.760593   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:16.760593   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:16.760593   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:16.760593   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:16.760593   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:16 GMT
	I0612 15:06:16.760593   13752 round_trippers.go:580]     Audit-Id: 69d01f06-fa40-4561-83bb-edad1ac5973b
	I0612 15:06:16.762380   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:16.762380   13752 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 15:06:17.252887   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:17.252887   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.252887   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.252887   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.256727   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:17.257435   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.257435   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.257435   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.257435   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.257435   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.257435   13752 round_trippers.go:580]     Audit-Id: 63b43051-bfa9-4123-898d-52465adc9144
	I0612 15:06:17.257435   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.257435   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2170","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3933 chars]
	I0612 15:06:17.258204   13752 node_ready.go:49] node "multinode-025000-m02" has status "Ready":"True"
	I0612 15:06:17.258204   13752 node_ready.go:38] duration metric: took 9.0191279s for node "multinode-025000-m02" to be "Ready" ...
	I0612 15:06:17.258204   13752 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:06:17.258204   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:06:17.258731   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.258731   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.258852   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.263985   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:06:17.263985   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.263985   13752 round_trippers.go:580]     Audit-Id: 6502e7b7-93d0-43dc-bd9c-a7595ad1e5d9
	I0612 15:06:17.263985   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.263985   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.263985   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.263985   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.264385   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.267291   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2173"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86160 chars]
	I0612 15:06:17.270940   13752 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.270940   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:06:17.270940   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.270940   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.270940   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.273803   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.273803   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.273803   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.273803   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.273803   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.273803   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.273803   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.273803   13752 round_trippers.go:580]     Audit-Id: 49dc6815-8a94-46cf-b4c9-0dc14ef5fcf4
	I0612 15:06:17.274879   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0612 15:06:17.275167   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.275167   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.275167   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.275167   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.277727   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.278356   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.278356   13752 round_trippers.go:580]     Audit-Id: da26319a-0c45-4169-99f2-eb7328d58e3f
	I0612 15:06:17.278356   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.278356   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.278356   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.278356   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.278356   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.278732   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.279146   13752 pod_ready.go:92] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.279146   13752 pod_ready.go:81] duration metric: took 8.2062ms for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
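Once the node is Ready (after ~9 s here), `pod_ready.go` lists every pod in kube-system in one request and then checks each system-critical pod individually; the `has status "Ready":"True"` verdicts logged above come from the pod's own readiness signal. A sketch of that check, assuming readiness is judged by the PodReady condition (helper names are illustrative):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady mirrors the verdict the log prints as has status "Ready":"True":
// a pod counts as Ready when its PodReady condition reports True.
// (Assumed basis for the check; not minikube's actual helper.)
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One GET /api/v1/namespaces/kube-system/pods, as in the trace above,
	// then a per-pod readiness verdict.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("%-55s ready=%v\n", pods.Items[i].Name, podIsReady(&pods.Items[i]))
	}
}
```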
	I0612 15:06:17.279233   13752 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.279302   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025000
	I0612 15:06:17.279302   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.279338   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.279338   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.282208   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.282208   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.282208   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.282208   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.282208   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.282208   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.282208   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.282208   13752 round_trippers.go:580]     Audit-Id: 9543d149-af3f-4949-8b18-05de62295166
	I0612 15:06:17.282613   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2","resourceVersion":"1875","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.200.184:2379","kubernetes.io/config.hash":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.mirror":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.seen":"2024-06-12T22:02:25.563300686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0612 15:06:17.283360   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.283360   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.283360   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.283511   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.285759   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.285980   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.285980   13752 round_trippers.go:580]     Audit-Id: 9ec89d8c-474d-4de4-8eb8-91ef510d22cc
	I0612 15:06:17.286039   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.286039   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.286039   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.286039   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.286039   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.286424   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.286519   13752 pod_ready.go:92] pod "etcd-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.286519   13752 pod_ready.go:81] duration metric: took 7.2857ms for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.286519   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.286519   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025000
	I0612 15:06:17.286519   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.286519   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.286519   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.289721   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.289824   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.289824   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.289907   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.289963   13752 round_trippers.go:580]     Audit-Id: d2ea6228-ca21-4202-8144-e0b618f9b6c5
	I0612 15:06:17.289963   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.289963   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.289963   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.290031   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025000","namespace":"kube-system","uid":"63e55411-d432-4e5a-becc-fae0887fecae","resourceVersion":"1897","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.200.184:8443","kubernetes.io/config.hash":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.mirror":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.seen":"2024-06-12T22:02:25.478872091Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0612 15:06:17.290984   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.290984   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.290984   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.290984   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.293395   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.293395   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.293774   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.293774   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.293774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.293774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.293868   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.294008   13752 round_trippers.go:580]     Audit-Id: a94194a8-2c21-4b96-bb21-b96fe8d08ee1
	I0612 15:06:17.294323   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.294909   13752 pod_ready.go:92] pod "kube-apiserver-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.294996   13752 pod_ready.go:81] duration metric: took 8.4766ms for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.294996   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.295194   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025000
	I0612 15:06:17.295194   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.295194   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.295194   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.300492   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:06:17.300492   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.300492   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.300492   13752 round_trippers.go:580]     Audit-Id: a16f2ac3-462d-4032-b8f8-a3b0abf05ad5
	I0612 15:06:17.300492   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.300492   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.300590   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.300590   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.300889   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025000","namespace":"kube-system","uid":"68c9aa4f-49ee-439c-ad51-7943e65c0085","resourceVersion":"1895","creationTimestamp":"2024-06-12T21:39:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.mirror":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.seen":"2024-06-12T21:39:23.999674614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0612 15:06:17.301558   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.301656   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.301656   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.301656   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.304419   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.304419   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.304419   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.304419   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.304419   13752 round_trippers.go:580]     Audit-Id: d6ee5714-1c76-43bf-b1b2-3888afac52de
	I0612 15:06:17.304419   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.304419   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.304419   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.304419   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.304419   13752 pod_ready.go:92] pod "kube-controller-manager-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.304419   13752 pod_ready.go:81] duration metric: took 9.4239ms for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.304419   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.468181   13752 request.go:629] Waited for 163.5049ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:06:17.468280   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:06:17.468349   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.468349   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.468349   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.472267   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:17.472267   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.472267   13752 round_trippers.go:580]     Audit-Id: 4099e2c6-fa22-4d62-a01a-fda57fcbd95e
	I0612 15:06:17.472267   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.472267   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.472267   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.472267   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.472267   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.472453   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47lr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"10b24fa7-8eea-4fbb-ab18-404e853aa7ab","resourceVersion":"1793","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0612 15:06:17.655141   13752 request.go:629] Waited for 181.884ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.655401   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.655520   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.655520   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.655566   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.659977   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:17.659977   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.659977   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.659977   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.659977   13752 round_trippers.go:580]     Audit-Id: f186c650-bfd0-4cee-98c1-ef76bc6d3c38
	I0612 15:06:17.660316   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.660316   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.660316   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.660434   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.661352   13752 pod_ready.go:92] pod "kube-proxy-47lr8" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.661352   13752 pod_ready.go:81] duration metric: took 356.9312ms for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
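The `request.go:629` "Waited for ... due to client-side throttling" lines are client-go's own token-bucket rate limiter, not API-server priority and fairness: once the burst budget is spent, each request blocks until the QPS allowance refills, and client-go logs the wait. The limits live on `rest.Config`; a minimal sketch follows — the QPS/Burst values are illustrative, not minikube's actual settings:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Client-side token bucket: up to Burst requests may go out back-to-back,
	// after which each call blocks until the QPS budget refills — the wait
	// request.go logs as "due to client-side throttling, not priority and
	// fairness". Values here are illustrative, not minikube's.
	cfg.QPS = 5
	cfg.Burst = 10
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs
	fmt.Printf("client ready; throttled to %.0f req/s (burst %d)\n", cfg.QPS, cfg.Burst)
}
```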
	I0612 15:06:17.661488   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.856501   13752 request.go:629] Waited for 194.7294ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:06:17.856501   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:06:17.856501   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.856501   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.856501   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.860079   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:17.860079   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.860079   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.860079   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.860079   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.860079   13752 round_trippers.go:580]     Audit-Id: 5ca5cb14-3561-4a97-9b4e-df25600c7d70
	I0612 15:06:17.860079   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.860079   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.861101   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7jwdg","generateName":"kube-proxy-","namespace":"kube-system","uid":"643030f7-b876-4243-bacc-04205e88cc9e","resourceVersion":"1748","creationTimestamp":"2024-06-12T21:47:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:47:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0612 15:06:18.060098   13752 request.go:629] Waited for 197.7279ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:06:18.060462   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:06:18.060462   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.060508   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.060508   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.064915   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:18.065540   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.065540   13752 round_trippers.go:580]     Audit-Id: db5203be-bbdc-4c51-ad76-1a303bb0065d
	I0612 15:06:18.065540   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.065540   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.065627   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.065627   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.065627   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.065897   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m03","uid":"9d457bc2-c46f-4b5d-8023-5c06ef6198c7","resourceVersion":"1913","creationTimestamp":"2024-06-12T21:57:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_57_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:57:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0612 15:06:18.066510   13752 pod_ready.go:97] node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
	I0612 15:06:18.066536   13752 pod_ready.go:81] duration metric: took 405.0463ms for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	E0612 15:06:18.066536   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
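Note that `kube-proxy-7jwdg` is skipped rather than failed: its hosting node `multinode-025000-m03` reports Ready "Unknown" (its kubelet has stopped posting status), so WaitExtra records the condition and moves on to the next pod. A sketch of the guard's core question, assuming the verdict comes from the NodeReady condition (helper name is illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReadyStatus returns the raw NodeReady condition status: "True",
// "False", or "Unknown" — the last is what multinode-025000-m03 reports
// above, so pods scheduled there are skipped, not failed.
func nodeReadyStatus(node *corev1.Node) corev1.ConditionStatus {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status
		}
	}
	// Absent condition: report Unknown as the conservative default.
	return corev1.ConditionUnknown
}

func main() {
	fmt.Println(nodeReadyStatus(&corev1.Node{})) // prints "Unknown"
}
```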
	I0612 15:06:18.066536   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:18.264859   13752 request.go:629] Waited for 198.1942ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:06:18.264859   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:06:18.264859   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.265014   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.265014   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.269250   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:18.269250   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.269250   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.269250   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.269250   13752 round_trippers.go:580]     Audit-Id: 8e4f04e7-e18f-432b-9541-ba06e0420547
	I0612 15:06:18.269250   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.269250   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.269250   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.269626   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdcdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623833c-ce55-46b1-a840-99b3143adac1","resourceVersion":"2151","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5842 chars]
	I0612 15:06:18.467478   13752 request.go:629] Waited for 196.8774ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:18.467605   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:18.467605   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.467605   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.467605   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.471330   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:18.472181   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.472181   13752 round_trippers.go:580]     Audit-Id: 3d59ecb5-0fbb-4bd4-8a7f-5d0b4e7f4ae0
	I0612 15:06:18.472181   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.472181   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.472181   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.472181   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.472181   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.472638   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2170","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3933 chars]
	I0612 15:06:18.473123   13752 pod_ready.go:92] pod "kube-proxy-tdcdp" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:18.473123   13752 pod_ready.go:81] duration metric: took 406.5859ms for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:18.473206   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:18.653827   13752 request.go:629] Waited for 180.3453ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:06:18.653933   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:06:18.653933   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.653933   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.654069   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.657416   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:18.657416   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.657416   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.657416   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.657948   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.657948   13752 round_trippers.go:580]     Audit-Id: f3e17f0e-0fca-4846-a5cc-916171b94ef8
	I0612 15:06:18.657948   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.657948   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.658276   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025000","namespace":"kube-system","uid":"83b272cb-1286-47d8-bcb1-a66056dff2a5","resourceVersion":"1865","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.mirror":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.seen":"2024-06-12T21:39:31.214466565Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0612 15:06:18.855871   13752 request.go:629] Waited for 196.6766ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:18.855871   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:18.855871   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.855871   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.855871   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.859996   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:18.859996   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.859996   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.859996   13752 round_trippers.go:580]     Audit-Id: b5738e96-a4aa-4c60-a6b8-ab5f4a242595
	I0612 15:06:18.859996   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.860463   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.860463   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.860463   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.861286   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:18.861795   13752 pod_ready.go:92] pod "kube-scheduler-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:18.861878   13752 pod_ready.go:81] duration metric: took 388.6702ms for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:18.861878   13752 pod_ready.go:38] duration metric: took 1.6036681s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:06:18.861966   13752 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 15:06:18.873743   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 15:06:18.905776   13752 system_svc.go:56] duration metric: took 43.8101ms WaitForService to wait for kubelet
	I0612 15:06:18.905776   13752 kubeadm.go:576] duration metric: took 10.9014821s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 15:06:18.905776   13752 node_conditions.go:102] verifying NodePressure condition ...
	I0612 15:06:19.059262   13752 request.go:629] Waited for 153.4854ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes
	I0612 15:06:19.059413   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes
	I0612 15:06:19.059413   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:19.059413   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:19.059413   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:19.065979   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:06:19.065979   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:19.065979   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:19.065979   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:19.065979   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:19 GMT
	I0612 15:06:19.065979   13752 round_trippers.go:580]     Audit-Id: 0179ee51-13d6-4e75-97a4-fd0d5877edfc
	I0612 15:06:19.065979   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:19.065979   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:19.067024   13752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2175"},"items":[{"metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15609 chars]
	I0612 15:06:19.068103   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:06:19.068165   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:06:19.068165   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:06:19.068165   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:06:19.068165   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:06:19.068271   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:06:19.068271   13752 node_conditions.go:105] duration metric: took 162.4941ms to run NodePressure ...
	I0612 15:06:19.068271   13752 start.go:240] waiting for startup goroutines ...
	I0612 15:06:19.068335   13752 start.go:254] writing updated cluster config ...
	I0612 15:06:19.073446   13752 out.go:177] 
	I0612 15:06:19.076540   13752 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:06:19.084693   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:06:19.085688   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:06:19.091739   13752 out.go:177] * Starting "multinode-025000-m03" worker node in "multinode-025000" cluster
	I0612 15:06:19.095317   13752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 15:06:19.095317   13752 cache.go:56] Caching tarball of preloaded images
	I0612 15:06:19.095570   13752 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 15:06:19.095570   13752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 15:06:19.095570   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:06:19.099810   13752 start.go:360] acquireMachinesLock for multinode-025000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 15:06:19.099810   13752 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-025000-m03"
	I0612 15:06:19.100656   13752 start.go:96] Skipping create...Using existing machine configuration
	I0612 15:06:19.100656   13752 fix.go:54] fixHost starting: m03
	I0612 15:06:19.100872   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m03 ).state
	I0612 15:06:21.260404   13752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0612 15:06:21.260557   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:21.260616   13752 fix.go:112] recreateIfNeeded on multinode-025000-m03: state=Stopped err=<nil>
	W0612 15:06:21.260616   13752 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 15:06:21.265241   13752 out.go:177] * Restarting existing hyperv VM for "multinode-025000-m03" ...
	I0612 15:06:21.267720   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-025000-m03
	I0612 15:06:24.349628   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:06:24.349628   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:24.349628   13752 main.go:141] libmachine: Waiting for host to start...
	I0612 15:06:24.349628   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m03 ).state
	I0612 15:06:26.672510   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:06:26.672510   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:26.672510   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m03 ).networkadapters[0]).ipaddresses[0]
	I0612 15:06:29.270292   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:06:29.270292   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:30.275912   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m03 ).state

                                                
                                                
** /stderr **
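The tail of the stderr above shows minikube's Hyper-V driver restarting the "multinode-025000-m03" VM and then shelling out to PowerShell roughly once per second, first checking `( Hyper-V\Get-VM ... ).state` and then reading `(( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0]` until an address appears. Below is a minimal Go sketch of that poll loop, assuming the exact PowerShell expressions captured in the log; the helper names runPS and waitForVMIP are illustrative, not minikube's actual API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runPS executes one PowerShell expression non-interactively, mirroring the
// "[executing ==>]" lines in the log, and returns trimmed stdout.
func runPS(expr string) (string, error) {
	out, err := exec.Command("powershell.exe",
		"-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForVMIP polls the VM's first network adapter until it reports an
// address, matching the ~1s retry cadence visible in the log above.
// (Hypothetical helper; minikube's real driver lives in libmachine.)
func waitForVMIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ip, err := runPS(fmt.Sprintf(
			"(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if err == nil && ip != "" {
			return ip, nil // e.g. "172.23.200.184" once the guest is up
		}
		time.Sleep(time.Second) // empty stdout: adapter has no lease yet
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForVMIP("multinode-025000-m03", 5*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("VM IP:", ip)
}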
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-025000" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-025000
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-025000: context deadline exceeded (345.9µs)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-025000" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-025000	172.23.198.154
multinode-025000-m02	172.23.196.105
multinode-025000-m03	172.23.206.72

                                                
                                                
After restart: 
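The assertion at multinode_test.go:338 compares the node list captured before the restart with the one captured after. Here the post-restart `node list` call failed almost immediately with `context deadline exceeded`, so the "After restart" side is empty and the comparison fails. A small Go sketch of that name-to-IP comparison, assuming the tab-separated `name<TAB>ip` format shown in the "Before restart" output above; parseNodeList is an illustrative helper, not the test's own code.

package main

import (
	"fmt"
	"strings"
)

// parseNodeList turns "name<TAB>ip" lines, as printed in the failure
// message above, into a name->IP map. Malformed lines are skipped.
func parseNodeList(out string) map[string]string {
	nodes := map[string]string{}
	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 {
			nodes[fields[0]] = fields[1]
		}
	}
	return nodes
}

func main() {
	before := parseNodeList("multinode-025000\t172.23.198.154\n" +
		"multinode-025000-m02\t172.23.196.105\n" +
		"multinode-025000-m03\t172.23.206.72")
	after := parseNodeList("") // empty: the post-restart call timed out
	for name, ip := range before {
		if after[name] != ip {
			fmt.Printf("node %s changed: %q -> %q\n", name, ip, after[name])
		}
	}
}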
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-025000 -n multinode-025000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-025000 -n multinode-025000: (12.2993786s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 logs -n 25: (11.4102083s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-025000 cp testdata\cp-test.txt                                                                                | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:50 PDT | 12 Jun 24 14:50 PDT |
	|         | multinode-025000-m02:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n                                                                                                 | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:50 PDT | 12 Jun 24 14:51 PDT |
	|         | multinode-025000-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt                                                       | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:51 PDT | 12 Jun 24 14:51 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile283731824\001\cp-test_multinode-025000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n                                                                                                 | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:51 PDT | 12 Jun 24 14:51 PDT |
	|         | multinode-025000-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt                                                       | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:51 PDT | 12 Jun 24 14:51 PDT |
	|         | multinode-025000:/home/docker/cp-test_multinode-025000-m02_multinode-025000.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n                                                                                                 | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:51 PDT | 12 Jun 24 14:51 PDT |
	|         | multinode-025000-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n multinode-025000 sudo cat                                                                       | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:51 PDT | 12 Jun 24 14:51 PDT |
	|         | /home/docker/cp-test_multinode-025000-m02_multinode-025000.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt                                                       | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:51 PDT | 12 Jun 24 14:52 PDT |
	|         | multinode-025000-m03:/home/docker/cp-test_multinode-025000-m02_multinode-025000-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n                                                                                                 | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:52 PDT | 12 Jun 24 14:52 PDT |
	|         | multinode-025000-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n multinode-025000-m03 sudo cat                                                                   | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:52 PDT | 12 Jun 24 14:52 PDT |
	|         | /home/docker/cp-test_multinode-025000-m02_multinode-025000-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-025000 cp testdata\cp-test.txt                                                                                | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:52 PDT | 12 Jun 24 14:52 PDT |
	|         | multinode-025000-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n                                                                                                 | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:52 PDT | 12 Jun 24 14:52 PDT |
	|         | multinode-025000-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt                                                       | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:52 PDT | 12 Jun 24 14:53 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile283731824\001\cp-test_multinode-025000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n                                                                                                 | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:53 PDT | 12 Jun 24 14:53 PDT |
	|         | multinode-025000-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt                                                       | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:53 PDT | 12 Jun 24 14:53 PDT |
	|         | multinode-025000:/home/docker/cp-test_multinode-025000-m03_multinode-025000.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n                                                                                                 | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:53 PDT | 12 Jun 24 14:53 PDT |
	|         | multinode-025000-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n multinode-025000 sudo cat                                                                       | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:53 PDT | 12 Jun 24 14:53 PDT |
	|         | /home/docker/cp-test_multinode-025000-m03_multinode-025000.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt                                                       | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:53 PDT | 12 Jun 24 14:54 PDT |
	|         | multinode-025000-m02:/home/docker/cp-test_multinode-025000-m03_multinode-025000-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n                                                                                                 | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:54 PDT | 12 Jun 24 14:54 PDT |
	|         | multinode-025000-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-025000 ssh -n multinode-025000-m02 sudo cat                                                                   | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:54 PDT | 12 Jun 24 14:54 PDT |
	|         | /home/docker/cp-test_multinode-025000-m03_multinode-025000-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-025000 node stop m03                                                                                          | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:54 PDT | 12 Jun 24 14:54 PDT |
	| node    | multinode-025000 node start                                                                                             | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:55 PDT | 12 Jun 24 14:58 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                              |                  |                   |         |                     |                     |
	| node    | list -p multinode-025000                                                                                                | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:58 PDT |                     |
	| stop    | -p multinode-025000                                                                                                     | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 14:58 PDT | 12 Jun 24 15:00 PDT |
	| start   | -p multinode-025000                                                                                                     | multinode-025000 | minikube1\jenkins | v1.33.1 | 12 Jun 24 15:00 PDT |                     |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 15:00:23
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 15:00:23.024570   13752 out.go:291] Setting OutFile to fd 1068 ...
	I0612 15:00:23.025445   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 15:00:23.025445   13752 out.go:304] Setting ErrFile to fd 1628...
	I0612 15:00:23.025445   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 15:00:23.051240   13752 out.go:298] Setting JSON to false
	I0612 15:00:23.055591   13752 start.go:129] hostinfo: {"hostname":"minikube1","uptime":27975,"bootTime":1718201647,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 15:00:23.055591   13752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 15:00:23.136005   13752 out.go:177] * [multinode-025000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 15:00:23.144243   13752 notify.go:220] Checking for updates...
	I0612 15:00:23.180967   13752 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:00:23.194523   13752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 15:00:23.232736   13752 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 15:00:23.241902   13752 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 15:00:23.280655   13752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 15:00:23.376454   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:00:23.376454   13752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 15:00:28.888107   13752 out.go:177] * Using the hyperv driver based on existing profile
	I0612 15:00:28.939003   13752 start.go:297] selected driver: hyperv
	I0612 15:00:28.939977   13752 start.go:901] validating driver "hyperv" against &{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 15:00:28.940472   13752 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0612 15:00:28.993223   13752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 15:00:28.993326   13752 cni.go:84] Creating CNI manager for ""
	I0612 15:00:28.993326   13752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 15:00:28.993515   13752 start.go:340] cluster config:
	{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.198.154 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 15:00:28.993966   13752 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 15:00:29.075819   13752 out.go:177] * Starting "multinode-025000" primary control-plane node in "multinode-025000" cluster
	I0612 15:00:29.085745   13752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 15:00:29.085745   13752 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0612 15:00:29.085745   13752 cache.go:56] Caching tarball of preloaded images
	I0612 15:00:29.086702   13752 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 15:00:29.086702   13752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 15:00:29.086702   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:00:29.089982   13752 start.go:360] acquireMachinesLock for multinode-025000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 15:00:29.090206   13752 start.go:364] duration metric: took 113.1µs to acquireMachinesLock for "multinode-025000"
	I0612 15:00:29.090382   13752 start.go:96] Skipping create...Using existing machine configuration
	I0612 15:00:29.090382   13752 fix.go:54] fixHost starting: 
	I0612 15:00:29.090911   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:31.876279   13752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0612 15:00:31.876676   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:31.876676   13752 fix.go:112] recreateIfNeeded on multinode-025000: state=Stopped err=<nil>
	W0612 15:00:31.876676   13752 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 15:00:31.899886   13752 out.go:177] * Restarting existing hyperv VM for "multinode-025000" ...
	I0612 15:00:31.920140   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-025000
	I0612 15:00:34.982854   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:34.982854   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:34.982854   13752 main.go:141] libmachine: Waiting for host to start...
	I0612 15:00:34.982854   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:37.224031   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:00:37.224147   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:37.224147   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:00:39.720049   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:39.720049   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:40.722981   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:42.914786   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:00:42.915043   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:42.915043   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:00:45.520993   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:45.521215   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:46.528435   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:48.777859   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:00:48.778063   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:48.778106   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:00:51.337551   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:51.337551   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:52.343181   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:00:54.597726   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:00:54.597726   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:54.597906   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:00:57.129606   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:00:57.129606   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:00:58.129819   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:00.392349   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:00.392349   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:00.392645   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:03.007334   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:03.007334   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:03.010991   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:05.196721   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:05.196721   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:05.197433   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:07.796762   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:07.796762   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:07.798026   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:01:07.800838   13752 machine.go:94] provisionDockerMachine start ...
	I0612 15:01:07.800923   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:09.972772   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:09.972772   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:09.972772   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:12.493479   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:12.493479   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:12.512050   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:12.513615   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:12.513615   13752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 15:01:12.644231   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 15:01:12.644377   13752 buildroot.go:166] provisioning hostname "multinode-025000"
	I0612 15:01:12.644497   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:14.791166   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:14.791166   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:14.802459   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:17.325104   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:17.325104   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:17.342028   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:17.342727   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:17.342727   13752 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025000 && echo "multinode-025000" | sudo tee /etc/hostname
	I0612 15:01:17.496769   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025000
	
	I0612 15:01:17.496769   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:19.612891   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:19.625233   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:19.625468   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:22.136802   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:22.136802   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:22.156209   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:22.156209   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:22.156853   13752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 15:01:22.304434   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 15:01:22.304582   13752 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 15:01:22.304682   13752 buildroot.go:174] setting up certificates
	I0612 15:01:22.304758   13752 provision.go:84] configureAuth start
	I0612 15:01:22.304929   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:24.475460   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:24.475460   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:24.475721   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:27.022605   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:27.022605   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:27.034798   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:29.196649   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:29.196649   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:29.196649   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:31.706410   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:31.706410   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:31.706410   13752 provision.go:143] copyHostCerts
	I0612 15:01:31.718445   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 15:01:31.718445   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 15:01:31.718445   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 15:01:31.719342   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 15:01:31.720485   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 15:01:31.720717   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 15:01:31.720717   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 15:01:31.720717   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 15:01:31.722036   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 15:01:31.722251   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 15:01:31.722251   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 15:01:31.722644   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 15:01:31.723884   13752 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-025000 san=[127.0.0.1 172.23.200.184 localhost minikube multinode-025000]
	I0612 15:01:31.968051   13752 provision.go:177] copyRemoteCerts
	I0612 15:01:31.978531   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 15:01:31.978531   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:34.086511   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:34.086511   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:34.097813   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:36.512714   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:36.512714   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:36.523572   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:01:36.619849   13752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.641302s)
	I0612 15:01:36.619849   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 15:01:36.619849   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 15:01:36.670157   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 15:01:36.670739   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0612 15:01:36.715220   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 15:01:36.715606   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 15:01:36.756743   13752 provision.go:87] duration metric: took 14.4518735s to configureAuth
	I0612 15:01:36.756743   13752 buildroot.go:189] setting minikube options for container-runtime
	I0612 15:01:36.757477   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:01:36.757477   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:38.740322   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:38.740322   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:38.752089   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:41.137747   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:41.137747   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:41.143755   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:41.144286   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:41.144286   13752 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 15:01:41.270398   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 15:01:41.270398   13752 buildroot.go:70] root file system type: tmpfs
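	The check above, df --output=fstype / | tail -n 1, prints only the filesystem type of the root mount; tmpfs indicates the buildroot guest is running from RAM (booted off the ISO), which is presumably why the docker unit below is re-rendered on every provision instead of being assumed to persist across reboots. The probe is runnable on any Linux host:

	  df --output=fstype / | tail -n 1   # prints e.g. tmpfs, ext4, xfs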
	I0612 15:01:41.270605   13752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 15:01:41.270759   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:43.290625   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:43.290625   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:43.301117   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:45.720532   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:45.731356   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:45.737949   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:45.738921   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:45.738921   13752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 15:01:45.894484   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
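	A note on the printf ... | sudo tee idiom used to write the unit file above: a plain redirection such as sudo printf ... > file would not work, because the shell opens the target file before sudo elevates privileges; piping into sudo tee makes the privileged process itself perform the write. Minimal standalone example:

	  printf '%s\n' 'hello' | sudo tee /tmp/example.txt >/dev/null   # tee writes as root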
	I0612 15:01:45.894703   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:47.921662   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:47.921662   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:47.922998   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:50.324280   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:50.324280   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:50.342355   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:01:50.343153   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:01:50.343153   13752 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 15:01:52.774992   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
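	The diff ... || { mv ...; } one-liner above is an update-if-changed guard: diff exits non-zero when the two files differ or, as in this run, when the installed unit does not exist yet ("can't stat"), so the right-hand branch installs the new unit and reloads/enables/restarts docker only in that case. The same idiom for an arbitrary service (myapp and its paths are placeholders):

	  new=/etc/myapp/config.new cur=/etc/myapp/config
	  sudo diff -u "$cur" "$new" \
	    || { sudo mv "$new" "$cur"; sudo systemctl daemon-reload && sudo systemctl restart myapp; }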
	I0612 15:01:52.775052   13752 machine.go:97] duration metric: took 44.9740052s to provisionDockerMachine
	I0612 15:01:52.775088   13752 start.go:293] postStartSetup for "multinode-025000" (driver="hyperv")
	I0612 15:01:52.775127   13752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 15:01:52.787609   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 15:01:52.787609   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:54.799297   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:54.799297   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:54.799624   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:01:57.202331   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:01:57.202331   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:57.213066   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:01:57.314533   13752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5119571s)
	I0612 15:01:57.330091   13752 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 15:01:57.336815   13752 command_runner.go:130] > NAME=Buildroot
	I0612 15:01:57.336815   13752 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 15:01:57.336815   13752 command_runner.go:130] > ID=buildroot
	I0612 15:01:57.336815   13752 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 15:01:57.336815   13752 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 15:01:57.336924   13752 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 15:01:57.337014   13752 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 15:01:57.337050   13752 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 15:01:57.338266   13752 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 15:01:57.338338   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 15:01:57.351008   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 15:01:57.367855   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 15:01:57.410782   13752 start.go:296] duration metric: took 4.6356787s for postStartSetup
	I0612 15:01:57.410973   13752 fix.go:56] duration metric: took 1m28.3202151s for fixHost
	I0612 15:01:57.411094   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:01:59.432296   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:01:59.432296   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:01:59.432296   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:01.799333   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:01.809414   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:01.814747   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:02:01.815504   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:02:01.815504   13752 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 15:02:01.944249   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718229721.947015209
	
	I0612 15:02:01.944249   13752 fix.go:216] guest clock: 1718229721.947015209
	I0612 15:02:01.944421   13752 fix.go:229] Guest: 2024-06-12 15:02:01.947015209 -0700 PDT Remote: 2024-06-12 15:01:57.4109735 -0700 PDT m=+94.474017001 (delta=4.536041709s)
	I0612 15:02:01.944421   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:02:03.903036   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:02:03.903036   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:03.915082   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:06.269784   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:06.269784   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:06.286721   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:02:06.286898   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.200.184 22 <nil> <nil>}
	I0612 15:02:06.286898   13752 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718229721
	I0612 15:02:06.425776   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 22:02:01 UTC 2024
	
	I0612 15:02:06.425831   13752 fix.go:236] clock set: Wed Jun 12 22:02:01 UTC 2024
	 (err=<nil>)
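	The 15:02:01 to 15:02:06 steps implement a simple guest-clock sync: read the guest clock with sub-second precision (date +%s.%N), compute the delta against the host clock, and push the host's epoch seconds into the guest with date -s @<seconds>. A shell-only sketch of the same flow, assuming an SSH-reachable guest at the address above (minikube does the host side in Go):

	  guest=$(ssh docker@172.23.200.184 'date +%s.%N')        # guest wall clock
	  host=$(date +%s.%N)                                     # host wall clock
	  echo "drift: $(echo "$guest - $host" | bc) s"
	  ssh docker@172.23.200.184 "sudo date -s @$(date +%s)"   # set guest from host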
	I0612 15:02:06.425831   13752 start.go:83] releasing machines lock for "multinode-025000", held for 1m37.3353038s
	I0612 15:02:06.425890   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:02:08.402828   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:02:08.402828   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:08.413902   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:10.763921   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:10.763921   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:10.780104   13752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 15:02:10.780211   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:02:10.789901   13752 ssh_runner.go:195] Run: cat /version.json
	I0612 15:02:10.789901   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:02:12.871224   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:02:12.872396   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:12.871224   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:02:12.873520   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:12.874029   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:12.874158   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:02:15.442493   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:15.453605   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:15.453876   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:02:15.474546   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:02:15.474546   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:02:15.474546   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:02:15.537603   13752 command_runner.go:130] > {"iso_version": "v1.33.1-1718047936-19044", "kicbase_version": "v0.0.44-1718016726-19044", "minikube_version": "v1.33.1", "commit": "8a07c05cb41cba41fd6bf6981cdae9c899c82330"}
	I0612 15:02:15.537603   13752 ssh_runner.go:235] Completed: cat /version.json: (4.7476861s)
	I0612 15:02:15.551982   13752 ssh_runner.go:195] Run: systemctl --version
	I0612 15:02:15.612728   13752 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 15:02:15.613778   13752 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8325003s)
	I0612 15:02:15.613778   13752 command_runner.go:130] > systemd 252 (252)
	I0612 15:02:15.613857   13752 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0612 15:02:15.626624   13752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 15:02:15.632192   13752 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0612 15:02:15.635709   13752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 15:02:15.646874   13752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 15:02:15.675249   13752 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0612 15:02:15.675249   13752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
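	The find invocation above neutralizes competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, printing each path it touches (here 87-podman-bridge.conflist). The same command in standalone form, with the path passed to sh as an argument instead of being spliced into the command string:

	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;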
	I0612 15:02:15.675249   13752 start.go:494] detecting cgroup driver to use...
	I0612 15:02:15.675556   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 15:02:15.704025   13752 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0612 15:02:15.717565   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 15:02:15.751472   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 15:02:15.770467   13752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 15:02:15.783584   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 15:02:15.814866   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 15:02:15.849186   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 15:02:15.882284   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 15:02:15.914250   13752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 15:02:15.945545   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 15:02:15.975663   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 15:02:16.008244   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 15:02:16.038893   13752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 15:02:16.041397   13752 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 15:02:16.067860   13752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 15:02:16.100254   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:16.277337   13752 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0612 15:02:16.306088   13752 start.go:494] detecting cgroup driver to use...
	I0612 15:02:16.321276   13752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 15:02:16.345005   13752 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0612 15:02:16.345005   13752 command_runner.go:130] > [Unit]
	I0612 15:02:16.345005   13752 command_runner.go:130] > Description=Docker Application Container Engine
	I0612 15:02:16.345111   13752 command_runner.go:130] > Documentation=https://docs.docker.com
	I0612 15:02:16.345111   13752 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0612 15:02:16.345111   13752 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0612 15:02:16.345111   13752 command_runner.go:130] > StartLimitBurst=3
	I0612 15:02:16.345111   13752 command_runner.go:130] > StartLimitIntervalSec=60
	I0612 15:02:16.345111   13752 command_runner.go:130] > [Service]
	I0612 15:02:16.345111   13752 command_runner.go:130] > Type=notify
	I0612 15:02:16.345111   13752 command_runner.go:130] > Restart=on-failure
	I0612 15:02:16.345111   13752 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0612 15:02:16.345111   13752 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0612 15:02:16.345111   13752 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0612 15:02:16.345228   13752 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0612 15:02:16.345228   13752 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0612 15:02:16.345228   13752 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0612 15:02:16.345228   13752 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0612 15:02:16.345228   13752 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0612 15:02:16.345228   13752 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0612 15:02:16.345454   13752 command_runner.go:130] > ExecStart=
	I0612 15:02:16.345454   13752 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0612 15:02:16.345454   13752 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0612 15:02:16.345454   13752 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0612 15:02:16.345454   13752 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0612 15:02:16.345454   13752 command_runner.go:130] > LimitNOFILE=infinity
	I0612 15:02:16.345582   13752 command_runner.go:130] > LimitNPROC=infinity
	I0612 15:02:16.345582   13752 command_runner.go:130] > LimitCORE=infinity
	I0612 15:02:16.345582   13752 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0612 15:02:16.345582   13752 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0612 15:02:16.345582   13752 command_runner.go:130] > TasksMax=infinity
	I0612 15:02:16.345582   13752 command_runner.go:130] > TimeoutStartSec=0
	I0612 15:02:16.345582   13752 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0612 15:02:16.345582   13752 command_runner.go:130] > Delegate=yes
	I0612 15:02:16.345582   13752 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0612 15:02:16.345582   13752 command_runner.go:130] > KillMode=process
	I0612 15:02:16.345582   13752 command_runner.go:130] > [Install]
	I0612 15:02:16.345700   13752 command_runner.go:130] > WantedBy=multi-user.target
	I0612 15:02:16.357632   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 15:02:16.388628   13752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 15:02:16.433269   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 15:02:16.468774   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 15:02:16.502987   13752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 15:02:16.562283   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 15:02:16.586138   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 15:02:16.616419   13752 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0612 15:02:16.629391   13752 ssh_runner.go:195] Run: which cri-dockerd
	I0612 15:02:16.635116   13752 command_runner.go:130] > /usr/bin/cri-dockerd
	I0612 15:02:16.645833   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 15:02:16.664229   13752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 15:02:16.704572   13752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 15:02:16.870352   13752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 15:02:17.038400   13752 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 15:02:17.038728   13752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 15:02:17.089182   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:17.266251   13752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 15:02:19.887314   13752 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6210085s)
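	With containerd and docker both rewritten to use cgroupfs and restarted, the effective driver can be confirmed from inside the guest; the log does exactly that further down with docker info. Two quick probes (the stat check is an extra illustration, assuming GNU coreutils in the guest):

	  docker info --format '{{.CgroupDriver}}'   # prints cgroupfs in this run
	  stat -fc %T /sys/fs/cgroup                 # cgroup2fs would mean the unified v2 hierarchy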
	I0612 15:02:19.899055   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 15:02:19.939579   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 15:02:19.981164   13752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 15:02:20.173450   13752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 15:02:20.348512   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:20.517574   13752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 15:02:20.560540   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 15:02:20.594984   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:20.770037   13752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 15:02:20.872956   13752 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 15:02:20.886221   13752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 15:02:20.895051   13752 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0612 15:02:20.895111   13752 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 15:02:20.895187   13752 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I0612 15:02:20.895187   13752 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0612 15:02:20.895187   13752 command_runner.go:130] > Access: 2024-06-12 22:02:20.800595808 +0000
	I0612 15:02:20.895187   13752 command_runner.go:130] > Modify: 2024-06-12 22:02:20.800595808 +0000
	I0612 15:02:20.895244   13752 command_runner.go:130] > Change: 2024-06-12 22:02:20.803595814 +0000
	I0612 15:02:20.895244   13752 command_runner.go:130] >  Birth: -
	I0612 15:02:20.895244   13752 start.go:562] Will wait 60s for crictl version
	I0612 15:02:20.906649   13752 ssh_runner.go:195] Run: which crictl
	I0612 15:02:20.913520   13752 command_runner.go:130] > /usr/bin/crictl
	I0612 15:02:20.924518   13752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 15:02:20.974410   13752 command_runner.go:130] > Version:  0.1.0
	I0612 15:02:20.974463   13752 command_runner.go:130] > RuntimeName:  docker
	I0612 15:02:20.974463   13752 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0612 15:02:20.974523   13752 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 15:02:20.974633   13752 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 15:02:20.985231   13752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 15:02:21.014499   13752 command_runner.go:130] > 26.1.4
	I0612 15:02:21.025082   13752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 15:02:21.056249   13752 command_runner.go:130] > 26.1.4
	I0612 15:02:21.062089   13752 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 15:02:21.062184   13752 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 15:02:21.066424   13752 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 15:02:21.066424   13752 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 15:02:21.066424   13752 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 15:02:21.066424   13752 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 15:02:21.070396   13752 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 15:02:21.070436   13752 ip.go:210] interface addr: 172.23.192.1/20
	I0612 15:02:21.090525   13752 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 15:02:21.092788   13752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
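	The /etc/hosts edit above is a filter-and-append idiom: drop any stale host.minikube.internal line, append the current mapping, write the result to a temp file, and only then copy it over /etc/hosts under sudo. Standalone form of the same command:

	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '172.23.192.1\thost.minikube.internal\n'
	  } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts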
	I0612 15:02:21.117548   13752 kubeadm.go:877] updating cluster {Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.200.184 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0612 15:02:21.117926   13752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 15:02:21.126879   13752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0612 15:02:21.149231   13752 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0612 15:02:21.149231   13752 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 15:02:21.149231   13752 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0612 15:02:21.150228   13752 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0612 15:02:21.150228   13752 docker.go:615] Images already preloaded, skipping extraction
	I0612 15:02:21.159820   13752 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0612 15:02:21.185401   13752 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0612 15:02:21.185401   13752 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0612 15:02:21.185401   13752 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0612 15:02:21.185401   13752 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0612 15:02:21.185401   13752 cache_images.go:84] Images are preloaded, skipping loading
	I0612 15:02:21.185401   13752 kubeadm.go:928] updating node { 172.23.200.184 8443 v1.30.1 docker true true} ...
	I0612 15:02:21.185401   13752 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.200.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 15:02:21.195657   13752 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0612 15:02:21.219723   13752 command_runner.go:130] > cgroupfs
	I0612 15:02:21.227104   13752 cni.go:84] Creating CNI manager for ""
	I0612 15:02:21.227176   13752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 15:02:21.227255   13752 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0612 15:02:21.227255   13752 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.23.200.184 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025000 NodeName:multinode-025000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.23.200.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.23.200.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0612 15:02:21.227255   13752 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.23.200.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-025000"
	  kubeletExtraArgs:
	    node-ip: 172.23.200.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.23.200.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
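	The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new just below. To sanity-check such a file by hand, kubeadm ships a validator; the subcommand exists in kubeadm v1.26 and later, so it should apply to the v1.30.1 binaries used here:

	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new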
	I0612 15:02:21.238572   13752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 15:02:21.258967   13752 command_runner.go:130] > kubeadm
	I0612 15:02:21.259090   13752 command_runner.go:130] > kubectl
	I0612 15:02:21.259090   13752 command_runner.go:130] > kubelet
	I0612 15:02:21.259090   13752 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 15:02:21.269264   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0612 15:02:21.290390   13752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0612 15:02:21.319770   13752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 15:02:21.348775   13752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0612 15:02:21.388388   13752 ssh_runner.go:195] Run: grep 172.23.200.184	control-plane.minikube.internal$ /etc/hosts
	I0612 15:02:21.392093   13752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.200.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 15:02:21.424361   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:21.598889   13752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 15:02:21.628001   13752 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000 for IP: 172.23.200.184
	I0612 15:02:21.628001   13752 certs.go:194] generating shared ca certs ...
	I0612 15:02:21.628160   13752 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:21.628878   13752 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 15:02:21.629121   13752 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 15:02:21.629121   13752 certs.go:256] generating profile certs ...
	I0612 15:02:21.630240   13752 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\client.key
	I0612 15:02:21.630240   13752 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.dac33de1
	I0612 15:02:21.630240   13752 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.dac33de1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.23.200.184]
	I0612 15:02:21.786227   13752 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.dac33de1 ...
	I0612 15:02:21.786227   13752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.dac33de1: {Name:mk0970a1a7df551c6e9312560c14ab64a80c5ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:21.793525   13752 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.dac33de1 ...
	I0612 15:02:21.793525   13752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.dac33de1: {Name:mk4749182fd801b252e332471089f28320779661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:21.795038   13752 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt.dac33de1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt
	I0612 15:02:21.807459   13752 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key.dac33de1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key
	I0612 15:02:21.808767   13752 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key
	I0612 15:02:21.808767   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 15:02:21.810003   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 15:02:21.810003   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 15:02:21.810359   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 15:02:21.810359   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0612 15:02:21.810359   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0612 15:02:21.810359   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0612 15:02:21.811045   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0612 15:02:21.811292   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 15:02:21.812110   13752 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 15:02:21.812305   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 15:02:21.812578   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 15:02:21.813151   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 15:02:21.813456   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 15:02:21.813905   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 15:02:21.813905   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 15:02:21.814578   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 15:02:21.814880   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:21.815135   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 15:02:21.862681   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 15:02:21.910350   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 15:02:21.961376   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 15:02:22.001691   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0612 15:02:22.052317   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0612 15:02:22.094125   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0612 15:02:22.148089   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0612 15:02:22.194034   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 15:02:22.233292   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 15:02:22.289534   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 15:02:22.334222   13752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0612 15:02:22.377356   13752 ssh_runner.go:195] Run: openssl version
	I0612 15:02:22.385877   13752 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 15:02:22.398163   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 15:02:22.433853   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 15:02:22.441126   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 15:02:22.441264   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 15:02:22.451480   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 15:02:22.455048   13752 command_runner.go:130] > 51391683
	I0612 15:02:22.471286   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 15:02:22.500977   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 15:02:22.530484   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 15:02:22.539178   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 15:02:22.539417   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 15:02:22.550319   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 15:02:22.558360   13752 command_runner.go:130] > 3ec20f2e
	I0612 15:02:22.569385   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 15:02:22.599508   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 15:02:22.628984   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:22.636280   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:22.636280   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:22.646032   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:02:22.648698   13752 command_runner.go:130] > b5213941
	I0612 15:02:22.665790   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
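
The sequence above installs minikube's CA certificates into the guest's shared trust store: each PEM is copied under /usr/share/ca-certificates, hashed with openssl x509 -hash -noout, and then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL-style directory lookup can find it (51391683.0, 3ec20f2e.0 and b5213941.0 above). A minimal Go sketch of those two steps, assuming openssl is on PATH; the path is taken from the log, and this is illustrative, not minikube's own implementation:

	// linkCert mimics the steps in the log: ask openssl for the certificate's
	// subject hash, then link /etc/ssl/certs/<hash>.0 at the certificate.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. 51391683
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // the equivalent of ln -fs: replace any stale link first
		return os.Symlink(certPath, link)
	}

	func main() {
		// Path taken from the log above; adjust for your own trust store.
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
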
	I0612 15:02:22.696515   13752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 15:02:22.705902   13752 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 15:02:22.705980   13752 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0612 15:02:22.706026   13752 command_runner.go:130] > Device: 8,1	Inode: 3149138     Links: 1
	I0612 15:02:22.706026   13752 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 15:02:22.706026   13752 command_runner.go:130] > Access: 2024-06-12 21:39:19.572401955 +0000
	I0612 15:02:22.706086   13752 command_runner.go:130] > Modify: 2024-06-12 21:39:19.572401955 +0000
	I0612 15:02:22.706086   13752 command_runner.go:130] > Change: 2024-06-12 21:39:19.572401955 +0000
	I0612 15:02:22.706086   13752 command_runner.go:130] >  Birth: 2024-06-12 21:39:19.572401955 +0000
	I0612 15:02:22.719217   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0612 15:02:22.728547   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.740117   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0612 15:02:22.751561   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.763163   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0612 15:02:22.766461   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.787627   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0612 15:02:22.797637   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.811117   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0612 15:02:22.819611   13752 command_runner.go:130] > Certificate will not expire
	I0612 15:02:22.830384   13752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0612 15:02:22.840557   13752 command_runner.go:130] > Certificate will not expire
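
Each check above is openssl x509 -checkend 86400: it exits 0 ("Certificate will not expire") when the certificate is still valid 24 hours from now. The same test can be done natively with crypto/x509; a sketch, assuming a PEM-encoded certificate at a path taken from the log:

	// expiresWithin reports whether the certificate at path expires inside d,
	// the same question `openssl x509 -checkend 86400` answers in the log.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
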
	I0612 15:02:22.840843   13752 kubeadm.go:391] StartCluster: {Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.200.184 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.196.105 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 15:02:22.848634   13752 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 15:02:22.882807   13752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0612 15:02:22.901123   13752 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0612 15:02:22.901123   13752 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0612 15:02:22.901178   13752 command_runner.go:130] > /var/lib/minikube/etcd:
	I0612 15:02:22.901178   13752 command_runner.go:130] > member
	W0612 15:02:22.901233   13752 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0612 15:02:22.901306   13752 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0612 15:02:22.901378   13752 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0612 15:02:22.912427   13752 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0612 15:02:22.930393   13752 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0612 15:02:22.931076   13752 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-025000" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:02:22.932207   13752 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-025000" cluster setting kubeconfig missing "multinode-025000" context setting]
	I0612 15:02:22.932969   13752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:22.948491   13752 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:02:22.949398   13752 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.200.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 15:02:22.951071   13752 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 15:02:22.961610   13752 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0612 15:02:22.981445   13752 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0612 15:02:22.981445   13752 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0612 15:02:22.981445   13752 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0612 15:02:22.981445   13752 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0612 15:02:22.981445   13752 command_runner.go:130] >  kind: InitConfiguration
	I0612 15:02:22.981599   13752 command_runner.go:130] >  localAPIEndpoint:
	I0612 15:02:22.981599   13752 command_runner.go:130] > -  advertiseAddress: 172.23.198.154
	I0612 15:02:22.981599   13752 command_runner.go:130] > +  advertiseAddress: 172.23.200.184
	I0612 15:02:22.981599   13752 command_runner.go:130] >    bindPort: 8443
	I0612 15:02:22.981599   13752 command_runner.go:130] >  bootstrapTokens:
	I0612 15:02:22.981599   13752 command_runner.go:130] >    - groups:
	I0612 15:02:22.981599   13752 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0612 15:02:22.981599   13752 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0612 15:02:22.981599   13752 command_runner.go:130] >    name: "multinode-025000"
	I0612 15:02:22.981755   13752 command_runner.go:130] >    kubeletExtraArgs:
	I0612 15:02:22.981755   13752 command_runner.go:130] > -    node-ip: 172.23.198.154
	I0612 15:02:22.981755   13752 command_runner.go:130] > +    node-ip: 172.23.200.184
	I0612 15:02:22.981755   13752 command_runner.go:130] >    taints: []
	I0612 15:02:22.981812   13752 command_runner.go:130] >  ---
	I0612 15:02:22.981812   13752 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0612 15:02:22.981812   13752 command_runner.go:130] >  kind: ClusterConfiguration
	I0612 15:02:22.981812   13752 command_runner.go:130] >  apiServer:
	I0612 15:02:22.981812   13752 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.23.198.154"]
	I0612 15:02:22.981812   13752 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.23.200.184"]
	I0612 15:02:22.981887   13752 command_runner.go:130] >    extraArgs:
	I0612 15:02:22.981887   13752 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0612 15:02:22.981887   13752 command_runner.go:130] >  controllerManager:
	I0612 15:02:22.981977   13752 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.23.198.154
	+  advertiseAddress: 172.23.200.184
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-025000"
	   kubeletExtraArgs:
	-    node-ip: 172.23.198.154
	+    node-ip: 172.23.200.184
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.23.198.154"]
	+  certSANs: ["127.0.0.1", "localhost", "172.23.200.184"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
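
Drift detection here rests on diff's exit status: diff -u old new exits 0 when the files match, 1 when they differ (with the unified diff on stdout, as captured above), and 2 on trouble. A hedged sketch of that check under those assumptions, not minikube's actual kubeadm.go code:

	// configDrift runs diff -u and maps its exit status onto
	// (identical | drifted | error), the way the log above reports it.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func configDrift(oldPath, newPath string) (string, bool, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return "", false, nil // status 0: files identical, nothing to do
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return string(out), true, nil // status 1: out holds the unified diff
		}
		return "", false, err // status 2: missing file or other failure
	}

	func main() {
		diff, drifted, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		if drifted {
			fmt.Println("detected kubeadm config drift:\n" + diff)
		}
	}
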
	I0612 15:02:22.982063   13752 kubeadm.go:1154] stopping kube-system containers ...
	I0612 15:02:22.990351   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0612 15:02:23.017079   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:02:23.018014   13752 command_runner.go:130] > 61910369e0d4
	I0612 15:02:23.018014   13752 command_runner.go:130] > 5b9e051df484
	I0612 15:02:23.018014   13752 command_runner.go:130] > 894c58e9fe75
	I0612 15:02:23.018014   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:02:23.018014   13752 command_runner.go:130] > c4842faba751
	I0612 15:02:23.018014   13752 command_runner.go:130] > fad98f611536
	I0612 15:02:23.018014   13752 command_runner.go:130] > 92f2d5f19e95
	I0612 15:02:23.018014   13752 command_runner.go:130] > 6b021c195669
	I0612 15:02:23.018014   13752 command_runner.go:130] > 2455f315465b
	I0612 15:02:23.018014   13752 command_runner.go:130] > 685d167da53c
	I0612 15:02:23.018122   13752 command_runner.go:130] > 0749f44d0356
	I0612 15:02:23.018122   13752 command_runner.go:130] > 2784305b1d5e
	I0612 15:02:23.018122   13752 command_runner.go:130] > 40443305b24f
	I0612 15:02:23.018122   13752 command_runner.go:130] > d9933fdc9ca7
	I0612 15:02:23.018122   13752 command_runner.go:130] > bb4351fab502
	I0612 15:02:23.018122   13752 docker.go:483] Stopping containers: [e83cf4eef49e 61910369e0d4 5b9e051df484 894c58e9fe75 4d60d82f6bc5 c4842faba751 fad98f611536 92f2d5f19e95 6b021c195669 2455f315465b 685d167da53c 0749f44d0356 2784305b1d5e 40443305b24f d9933fdc9ca7 bb4351fab502]
	I0612 15:02:23.027403   13752 ssh_runner.go:195] Run: docker stop e83cf4eef49e 61910369e0d4 5b9e051df484 894c58e9fe75 4d60d82f6bc5 c4842faba751 fad98f611536 92f2d5f19e95 6b021c195669 2455f315465b 685d167da53c 0749f44d0356 2784305b1d5e 40443305b24f d9933fdc9ca7 bb4351fab502
	I0612 15:02:23.056576   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:02:23.056576   13752 command_runner.go:130] > 61910369e0d4
	I0612 15:02:23.056576   13752 command_runner.go:130] > 5b9e051df484
	I0612 15:02:23.056576   13752 command_runner.go:130] > 894c58e9fe75
	I0612 15:02:23.056576   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:02:23.056576   13752 command_runner.go:130] > c4842faba751
	I0612 15:02:23.056576   13752 command_runner.go:130] > fad98f611536
	I0612 15:02:23.056665   13752 command_runner.go:130] > 92f2d5f19e95
	I0612 15:02:23.056665   13752 command_runner.go:130] > 6b021c195669
	I0612 15:02:23.056665   13752 command_runner.go:130] > 2455f315465b
	I0612 15:02:23.056665   13752 command_runner.go:130] > 685d167da53c
	I0612 15:02:23.056665   13752 command_runner.go:130] > 0749f44d0356
	I0612 15:02:23.056665   13752 command_runner.go:130] > 2784305b1d5e
	I0612 15:02:23.056665   13752 command_runner.go:130] > 40443305b24f
	I0612 15:02:23.056665   13752 command_runner.go:130] > d9933fdc9ca7
	I0612 15:02:23.056665   13752 command_runner.go:130] > bb4351fab502
	I0612 15:02:23.067475   13752 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0612 15:02:23.108441   13752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0612 15:02:23.126824   13752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0612 15:02:23.126824   13752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0612 15:02:23.127691   13752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0612 15:02:23.127756   13752 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 15:02:23.128040   13752 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0612 15:02:23.128102   13752 kubeadm.go:156] found existing configuration files:
	
	I0612 15:02:23.139648   13752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0612 15:02:23.142582   13752 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 15:02:23.156364   13752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0612 15:02:23.168231   13752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0612 15:02:23.196226   13752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0612 15:02:23.199394   13752 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 15:02:23.212511   13752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0612 15:02:23.223902   13752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0612 15:02:23.253475   13752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0612 15:02:23.255184   13752 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 15:02:23.270103   13752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0612 15:02:23.281449   13752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0612 15:02:23.309342   13752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0612 15:02:23.319462   13752 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 15:02:23.325131   13752 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0612 15:02:23.337594   13752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0612 15:02:23.366068   13752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0612 15:02:23.384106   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:23.682735   13752 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0612 15:02:23.684277   13752 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0612 15:02:23.684474   13752 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0612 15:02:23.684474   13752 command_runner.go:130] > [certs] Using the existing "sa" key
	I0612 15:02:23.684474   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:25.071287   13752 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0612 15:02:25.072170   13752 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0612 15:02:25.072170   13752 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0612 15:02:25.072170   13752 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0612 15:02:25.072241   13752 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0612 15:02:25.072241   13752 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0612 15:02:25.072241   13752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3877624s)
	I0612 15:02:25.072370   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:25.330905   13752 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 15:02:25.330976   13752 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 15:02:25.330976   13752 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0612 15:02:25.331087   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:25.419961   13752 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0612 15:02:25.420052   13752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0612 15:02:25.420052   13752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0612 15:02:25.420119   13752 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0612 15:02:25.420119   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:25.526305   13752 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0612 15:02:25.526535   13752 api_server.go:52] waiting for apiserver process to appear ...
	I0612 15:02:25.541441   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:26.054630   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:26.553329   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:27.054764   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:27.539054   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:02:27.568811   13752 command_runner.go:130] > 1830
	I0612 15:02:27.568941   13752 api_server.go:72] duration metric: took 2.0426292s to wait for apiserver process to appear ...
	I0612 15:02:27.568980   13752 api_server.go:88] waiting for apiserver healthz status ...
	I0612 15:02:27.569016   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:30.955519   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 15:02:30.955519   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 15:02:30.956234   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:30.985178   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0612 15:02:30.986074   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0612 15:02:31.077288   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:31.086447   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 15:02:31.086491   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 15:02:31.583106   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:31.595406   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 15:02:31.595491   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 15:02:32.074113   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:32.082132   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0612 15:02:32.082237   13752 api_server.go:103] status: https://172.23.200.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0612 15:02:32.580946   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:02:32.591357   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 200:
	ok
	I0612 15:02:32.591357   13752 round_trippers.go:463] GET https://172.23.200.184:8443/version
	I0612 15:02:32.591886   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:32.591886   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:32.591886   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:32.604444   13752 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0612 15:02:32.604444   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:32.604444   13752 round_trippers.go:580]     Content-Length: 263
	I0612 15:02:32.604444   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:32 GMT
	I0612 15:02:32.605361   13752 round_trippers.go:580]     Audit-Id: a9cb0e97-447e-4cdb-98d9-169c85c1c86e
	I0612 15:02:32.605361   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:32.605361   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:32.605361   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:32.605361   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:32.605361   13752 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0612 15:02:32.605361   13752 api_server.go:141] control plane version: v1.30.1
	I0612 15:02:32.605361   13752 api_server.go:131] duration metric: took 5.0363644s to wait for apiserver health ...
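
The wait loop above polls https://<node-ip>:8443/healthz roughly every 500ms, treating the early 403 (anonymous user, RBAC not yet bootstrapped) and 500 (poststarthook/rbac/bootstrap-roles still failing) responses as retryable until a plain 200 "ok" arrives. A self-contained sketch of such a loop, with InsecureSkipVerify standing in for the proper cluster-CA configuration; illustrative only:

	// waitHealthz polls /healthz until it returns 200, mirroring the
	// retry-on-403/500 behaviour seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip CA verification for brevity; a real client
			// would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		// IP and port taken from the log above; adjust for your cluster.
		if err := waitHealthz("https://172.23.200.184:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
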
	I0612 15:02:32.605361   13752 cni.go:84] Creating CNI manager for ""
	I0612 15:02:32.605361   13752 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0612 15:02:32.608697   13752 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0612 15:02:32.620615   13752 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0612 15:02:32.632215   13752 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0612 15:02:32.632288   13752 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0612 15:02:32.632288   13752 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0612 15:02:32.632288   13752 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0612 15:02:32.632288   13752 command_runner.go:130] > Access: 2024-06-12 22:01:00.846027700 +0000
	I0612 15:02:32.632288   13752 command_runner.go:130] > Modify: 2024-06-11 01:01:29.000000000 +0000
	I0612 15:02:32.632410   13752 command_runner.go:130] > Change: 2024-06-12 15:00:50.948000000 +0000
	I0612 15:02:32.632410   13752 command_runner.go:130] >  Birth: -
	I0612 15:02:32.632535   13752 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0612 15:02:32.632535   13752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0612 15:02:32.698218   13752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0612 15:02:33.730843   13752 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0612 15:02:33.730843   13752 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0612 15:02:33.730965   13752 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0612 15:02:33.730965   13752 command_runner.go:130] > daemonset.apps/kindnet configured
	I0612 15:02:33.731006   13752 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.0327847s)
	I0612 15:02:33.731075   13752 system_pods.go:43] waiting for kube-system pods to appear ...
	I0612 15:02:33.731132   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:02:33.731132   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:33.731132   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:33.731132   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:33.740158   13752 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 15:02:33.741945   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:33.742002   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:33.742002   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:33.742034   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:33.742034   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:33 GMT
	I0612 15:02:33.742034   13752 round_trippers.go:580]     Audit-Id: 5839a38a-4275-42e5-a4af-5068719c0c68
	I0612 15:02:33.742034   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:33.744890   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1790"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87778 chars]
	I0612 15:02:33.753330   13752 system_pods.go:59] 12 kube-system pods found
	I0612 15:02:33.753422   13752 system_pods.go:61] "coredns-7db6d8ff4d-vgcxw" [c5bd143a-d39e-46af-9308-0a97bb45729c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0612 15:02:33.753456   13752 system_pods.go:61] "etcd-multinode-025000" [be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0612 15:02:33.753456   13752 system_pods.go:61] "kindnet-8252q" [b1c2b9b3-0fd6-4393-b818-e7e823f89acc] Running
	I0612 15:02:33.753456   13752 system_pods.go:61] "kindnet-bqlg8" [1f004a05-3f5f-444b-9ac0-88f0e23da904] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0612 15:02:33.753456   13752 system_pods.go:61] "kindnet-v4cqk" [31faf6fc-5371-4f19-b71f-0a41b6dd2f79] Running
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-apiserver-multinode-025000" [63e55411-d432-4e5a-becc-fae0887fecae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-controller-manager-multinode-025000" [68c9aa4f-49ee-439c-ad51-7943e65c0085] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-proxy-47lr8" [10b24fa7-8eea-4fbb-ab18-404e853aa7ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-proxy-7jwdg" [643030f7-b876-4243-bacc-04205e88cc9e] Running
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-proxy-tdcdp" [b623833c-ce55-46b1-a840-99b3143adac1] Running
	I0612 15:02:33.753500   13752 system_pods.go:61] "kube-scheduler-multinode-025000" [83b272cb-1286-47d8-bcb1-a66056dff2a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0612 15:02:33.753500   13752 system_pods.go:61] "storage-provisioner" [d20f7489-1aa1-44b8-9221-4d1849884be4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0612 15:02:33.753500   13752 system_pods.go:74] duration metric: took 22.3672ms to wait for pod list to return data ...
	I0612 15:02:33.753500   13752 node_conditions.go:102] verifying NodePressure condition ...
	I0612 15:02:33.753500   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes
	I0612 15:02:33.753500   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:33.753500   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:33.753500   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:33.754196   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:33.754196   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:33.754196   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:33.754196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:33.754196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:33.754196   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:33 GMT
	I0612 15:02:33.754196   13752 round_trippers.go:580]     Audit-Id: 0f6b96c8-8308-4e48-9626-247692a01d6f
	I0612 15:02:33.754196   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:33.754196   13752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1790"},"items":[{"metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15630 chars]
	I0612 15:02:33.759594   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:02:33.759594   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:02:33.759741   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:02:33.759741   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:02:33.759741   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:02:33.759741   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:02:33.759741   13752 node_conditions.go:105] duration metric: took 6.2418ms to run NodePressure ...
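
The NodePressure check reads each node's capacity from GET /api/v1/nodes, which is where the three "ephemeral capacity is 17734596Ki" / "cpu capacity is 2" lines above come from. A stdlib-only sketch that decodes just those fields; authentication is deliberately elided (a real client would present the client certificate from the kubeconfig), so this is illustrative only:

	// List nodes and print the cpu and ephemeral-storage capacity fields,
	// the same values the NodePressure check above logs.
	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
	)

	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"` // quantities arrive as strings, e.g. "17734596Ki"
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		// Assumption: anonymous access for brevity; with RBAC enabled this
		// request needs the kubeconfig's client certificate or a token.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		resp, err := client.Get("https://172.23.200.184:8443/api/v1/nodes") // endpoint from the log
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		var nodes nodeList
		if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
			fmt.Println(err)
			return
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
		}
	}
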
	I0612 15:02:33.759741   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0612 15:02:34.187972   13752 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0612 15:02:34.188040   13752 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0612 15:02:34.188135   13752 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0612 15:02:34.188164   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0612 15:02:34.188164   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.188164   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.188164   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.193817   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.193862   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.193862   13752 round_trippers.go:580]     Audit-Id: 99e1870d-e541-4341-b9c7-50f896d322cd
	I0612 15:02:34.193862   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.193919   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.193919   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.193919   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.193919   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.195332   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1796"},"items":[{"metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2","resourceVersion":"1782","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.200.184:2379","kubernetes.io/config.hash":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.mirror":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.seen":"2024-06-12T22:02:25.563300686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0612 15:02:34.196585   13752 kubeadm.go:733] kubelet initialised
	I0612 15:02:34.197133   13752 kubeadm.go:734] duration metric: took 8.4502ms waiting for restarted kubelet to initialise ...
	I0612 15:02:34.197133   13752 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:02:34.197298   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:02:34.197298   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.197298   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.197298   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.199703   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:02:34.199703   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.199703   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.202944   13752 round_trippers.go:580]     Audit-Id: d3fb0f60-6aa8-4959-bd33-150c4513a34f
	I0612 15:02:34.202944   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.202944   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.202944   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.202944   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.207015   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1796"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87185 chars]
	I0612 15:02:34.214499   13752 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.215122   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:02:34.215122   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.215122   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.215122   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.215823   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.215823   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.215823   13752 round_trippers.go:580]     Audit-Id: 7ce271ef-8ae1-49bb-95c4-a3d4d2abc9ec
	I0612 15:02:34.215823   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.215823   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.215823   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.215823   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.215823   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.219160   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:02:34.219759   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.219827   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.219827   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.219827   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.220082   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.223212   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.223212   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.223212   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.223212   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.223212   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.223212   13752 round_trippers.go:580]     Audit-Id: f722ed58-6971-4345-9b46-8ca5a287bcc9
	I0612 15:02:34.223212   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.223511   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.223933   13752 pod_ready.go:97] node "multinode-025000" hosting pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.224030   13752 pod_ready.go:81] duration metric: took 8.9076ms for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.224030   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
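
The cycle above (GET the pod, GET its node, skip because the node reports Ready=False) repeats below for each control-plane pod. For reference, a minimal client-go sketch of the same one-shot check — illustrative only, not minikube's pod_ready.go (which adds waiting and retries); the kubeconfig path, namespace, and pod name are taken from this run:

    // readycheck.go: fetch a pod, then the node it is scheduled on, and
    // report the node's Ready condition — the gate pod_ready.go applies above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-vgcxw", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			// A pod on a NotReady node is skipped, exactly as logged above.
    			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
    		}
    	}
    }
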
	I0612 15:02:34.224030   13752 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.224136   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025000
	I0612 15:02:34.224215   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.224247   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.224289   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.225942   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:02:34.225942   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.225942   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.225942   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.225942   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.225942   13752 round_trippers.go:580]     Audit-Id: e5ca8e60-6c70-4073-9a58-fb2e7f16d768
	I0612 15:02:34.227168   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.227168   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.227372   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2","resourceVersion":"1782","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.200.184:2379","kubernetes.io/config.hash":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.mirror":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.seen":"2024-06-12T22:02:25.563300686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0612 15:02:34.227372   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.227372   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.227900   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.227900   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.228729   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.230321   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.230321   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.230321   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.230321   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.230321   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.230321   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.230321   13752 round_trippers.go:580]     Audit-Id: dc20e2f9-b1f0-4b77-826b-a1fda5d20fcb
	I0612 15:02:34.230731   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.231191   13752 pod_ready.go:97] node "multinode-025000" hosting pod "etcd-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.231236   13752 pod_ready.go:81] duration metric: took 7.206ms for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.231268   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "etcd-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.231268   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.231390   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025000
	I0612 15:02:34.231390   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.231430   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.231443   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.231701   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.231701   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.231701   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.231701   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.231701   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.231701   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.231701   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.231701   13752 round_trippers.go:580]     Audit-Id: 0ee8cf9b-f5f7-47c8-bd14-8445fb455245
	I0612 15:02:34.234484   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025000","namespace":"kube-system","uid":"63e55411-d432-4e5a-becc-fae0887fecae","resourceVersion":"1781","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.200.184:8443","kubernetes.io/config.hash":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.mirror":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.seen":"2024-06-12T22:02:25.478872091Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0612 15:02:34.235276   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.235276   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.235319   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.235319   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.237588   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.237588   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.237588   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.237588   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.237588   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.237588   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.237588   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.237588   13752 round_trippers.go:580]     Audit-Id: 4a969dbb-56ac-46e9-b9be-37aaf45bc432
	I0612 15:02:34.237780   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.238155   13752 pod_ready.go:97] node "multinode-025000" hosting pod "kube-apiserver-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.238239   13752 pod_ready.go:81] duration metric: took 6.9713ms for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.238239   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "kube-apiserver-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.238239   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.238357   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025000
	I0612 15:02:34.238399   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.238399   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.238399   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.238629   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.238629   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.238629   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.238629   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.238629   13752 round_trippers.go:580]     Audit-Id: a4666425-b1dd-4434-9cee-0f790a031a60
	I0612 15:02:34.238629   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.238629   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.241384   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.241750   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025000","namespace":"kube-system","uid":"68c9aa4f-49ee-439c-ad51-7943e65c0085","resourceVersion":"1776","creationTimestamp":"2024-06-12T21:39:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.mirror":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.seen":"2024-06-12T21:39:23.999674614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0612 15:02:34.242359   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.242359   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.242359   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.242359   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.242587   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.245123   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.245123   13752 round_trippers.go:580]     Audit-Id: c17de349-28c4-4be2-b4f1-2b65d98679e3
	I0612 15:02:34.245123   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.245123   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.245123   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.245123   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.245123   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.245257   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.246045   13752 pod_ready.go:97] node "multinode-025000" hosting pod "kube-controller-manager-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.246045   13752 pod_ready.go:81] duration metric: took 7.7469ms for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.246045   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "kube-controller-manager-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.246045   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.393279   13752 request.go:629] Waited for 147.0069ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:02:34.393466   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:02:34.393567   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.393585   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.393585   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.394318   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.394318   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.394318   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.394318   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.394318   13752 round_trippers.go:580]     Audit-Id: f5c59989-bbd7-4295-8f5f-9718f26a43b5
	I0612 15:02:34.397732   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.397732   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.397732   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.398029   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47lr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"10b24fa7-8eea-4fbb-ab18-404e853aa7ab","resourceVersion":"1793","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0612 15:02:34.590194   13752 request.go:629] Waited for 190.796ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.590419   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:34.590419   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.590419   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.590419   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.590718   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.594373   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.594373   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.594373   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.594373   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.594373   13752 round_trippers.go:580]     Audit-Id: b075564e-90f8-4821-a226-15e162bee9aa
	I0612 15:02:34.594373   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.594373   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.594699   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:34.595292   13752 pod_ready.go:97] node "multinode-025000" hosting pod "kube-proxy-47lr8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:34.595365   13752 pod_ready.go:81] duration metric: took 349.3183ms for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:34.595365   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "kube-proxy-47lr8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
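
The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side token-bucket rate limiter, which is distinct from server-side API Priority and Fairness. A minimal sketch of the knobs involved — the QPS/Burst values below are illustrative, not what minikube actually configures:

    // throttle.go: where the client-side limiter behind the "Waited ..."
    // messages is configured. Raising QPS/Burst shortens those waits.
    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cfg.QPS = 50    // steady-state requests per second (client-go default: 5)
    	cfg.Burst = 100 // short bursts allowed above QPS (client-go default: 10)
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	_ = cs // requests made through cs now throttle at the higher limits
    }
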
	I0612 15:02:34.595365   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:34.794840   13752 request.go:629] Waited for 199.1151ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:02:34.794967   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:02:34.794967   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.794967   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.794967   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:34.795388   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:34.795388   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:34.795388   13752 round_trippers.go:580]     Audit-Id: 1b0700c1-bfd2-44e2-a10f-884f0026a486
	I0612 15:02:34.795388   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:34.795388   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:34.795388   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:34.795388   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:34.795388   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:34 GMT
	I0612 15:02:34.798793   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7jwdg","generateName":"kube-proxy-","namespace":"kube-system","uid":"643030f7-b876-4243-bacc-04205e88cc9e","resourceVersion":"1748","creationTimestamp":"2024-06-12T21:47:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:47:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0612 15:02:34.999437   13752 request.go:629] Waited for 199.9684ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:02:34.999664   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:02:34.999960   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:34.999960   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:34.999960   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.000226   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.000226   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.000226   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.000226   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.000226   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.000226   13752 round_trippers.go:580]     Audit-Id: 1da4a651-be9d-4d48-b392-d674e54a35f9
	I0612 15:02:35.000226   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.000226   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.004160   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m03","uid":"9d457bc2-c46f-4b5d-8023-5c06ef6198c7","resourceVersion":"1760","creationTimestamp":"2024-06-12T21:57:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_57_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:57:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0612 15:02:35.004640   13752 pod_ready.go:97] node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
	I0612 15:02:35.004640   13752 pod_ready.go:81] duration metric: took 409.2736ms for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:35.004640   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
	I0612 15:02:35.004640   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:35.191442   13752 request.go:629] Waited for 186.4651ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:02:35.191737   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:02:35.191853   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:35.191853   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:35.191853   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.192136   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.192136   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.192136   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.192136   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.192136   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.192136   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.192136   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.192136   13752 round_trippers.go:580]     Audit-Id: a48ecf18-e3a7-4b57-9825-4c50d5c19ced
	I0612 15:02:35.195522   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdcdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623833c-ce55-46b1-a840-99b3143adac1","resourceVersion":"637","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0612 15:02:35.403151   13752 request.go:629] Waited for 206.8154ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:02:35.403425   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:02:35.403425   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:35.403425   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:35.403425   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.403900   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.407929   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.407929   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.408032   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.408032   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.408032   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.408032   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.408032   13752 round_trippers.go:580]     Audit-Id: a938f75c-91c1-42de-a957-e63452e95bac
	I0612 15:02:35.408215   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"1705","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0612 15:02:35.408788   13752 pod_ready.go:92] pod "kube-proxy-tdcdp" in "kube-system" namespace has status "Ready":"True"
	I0612 15:02:35.408788   13752 pod_ready.go:81] duration metric: took 404.1473ms for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:35.408876   13752 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:02:35.595108   13752 request.go:629] Waited for 185.8045ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:02:35.595108   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:02:35.595108   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:35.595108   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:35.595108   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.595640   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.595640   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.595640   13752 round_trippers.go:580]     Audit-Id: a816a944-d0b0-4787-bde1-73300e306955
	I0612 15:02:35.595640   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.595640   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.595640   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.595640   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.595640   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.599079   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025000","namespace":"kube-system","uid":"83b272cb-1286-47d8-bcb1-a66056dff2a5","resourceVersion":"1778","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.mirror":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.seen":"2024-06-12T21:39:31.214466565Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0612 15:02:35.792960   13752 request.go:629] Waited for 193.1829ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:35.793029   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:35.793193   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:35.793193   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:35.793193   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:35.794082   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:35.794082   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:35.794082   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:35.794082   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:35.794082   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:35 GMT
	I0612 15:02:35.794082   13752 round_trippers.go:580]     Audit-Id: 4b57cf87-db8c-4127-95ec-77335c84f0cb
	I0612 15:02:35.794082   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:35.794082   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:35.798282   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:35.798899   13752 pod_ready.go:97] node "multinode-025000" hosting pod "kube-scheduler-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:35.798899   13752 pod_ready.go:81] duration metric: took 390.0212ms for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	E0612 15:02:35.798899   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000" hosting pod "kube-scheduler-multinode-025000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000" has status "Ready":"False"
	I0612 15:02:35.798899   13752 pod_ready.go:38] duration metric: took 1.6017111s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:02:35.798899   13752 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0612 15:02:35.818162   13752 command_runner.go:130] > -16
	I0612 15:02:35.818243   13752 ops.go:34] apiserver oom_adj: -16
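
The probe above reads /proc/<pid>/oom_adj for kube-apiserver; a negative value such as -16 tells the kernel OOM killer to strongly prefer other victims. A hedged sketch reproducing the same check from the host — it assumes the `minikube ssh` subcommand and this run's profile name, and mirrors the exact shell pipeline logged above:

    // oomcheck.go: run the same oom_adj probe through `minikube ssh`.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "multinode-025000", "ssh", "--",
    		"/bin/bash", "-c", "cat /proc/$(pgrep kube-apiserver)/oom_adj").CombinedOutput()
    	if err != nil {
    		panic(fmt.Errorf("%v: %s", err, out))
    	}
    	fmt.Printf("apiserver oom_adj: %s", out) // expect a negative value, e.g. -16
    }
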
	I0612 15:02:35.818243   13752 kubeadm.go:591] duration metric: took 12.9168217s to restartPrimaryControlPlane
	I0612 15:02:35.818243   13752 kubeadm.go:393] duration metric: took 12.9773558s to StartCluster
	I0612 15:02:35.818243   13752 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:02:35.818470   13752 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:02:35.819880   13752 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
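
The kubeconfig write above is guarded by a named lock acquired with Delay:500ms and Timeout:1m0s. A minimal stdlib sketch of that acquire-with-retry shape — not minikube's actual lock implementation, only the pattern those parameters suggest: retry an exclusive create every delay until the timeout expires.

    // lockwait.go: illustrative acquire-with-retry around a file write.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquire tries to create path exclusively, retrying every delay until timeout.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("kubeconfig.lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	// ... write the kubeconfig while holding the lock ...
    }
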
	I0612 15:02:35.821386   13752 start.go:234] Will wait 6m0s for node &{Name: IP:172.23.200.184 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0612 15:02:35.825251   13752 out.go:177] * Verifying Kubernetes components...
	I0612 15:02:35.821386   13752 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0612 15:02:35.821721   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:02:35.829956   13752 out.go:177] * Enabled addons: 
	I0612 15:02:35.832597   13752 addons.go:510] duration metric: took 11.2115ms for enable addons: enabled=[]
	I0612 15:02:35.838299   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:02:36.101038   13752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 15:02:36.128543   13752 node_ready.go:35] waiting up to 6m0s for node "multinode-025000" to be "Ready" ...
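
The loop that follows polls the node roughly every 500ms until it reports Ready or the 6m0s budget expires. A hedged client-go sketch of an equivalent poll — the interval, timeout, and node name mirror the log; everything else is illustrative:

    // nodewait.go: poll a node's Ready condition on a fixed interval.
    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-025000", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient API errors as "keep polling"
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err) // node still NotReady after 6m
    	}
    }
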
	I0612 15:02:36.128797   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:36.128797   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:36.128895   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:36.128895   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:36.132997   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:02:36.132997   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:36.132997   13752 round_trippers.go:580]     Audit-Id: 89fffe12-9015-4f4d-97e3-025b69d22ee9
	I0612 15:02:36.132997   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:36.132997   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:36.132997   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:36.132997   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:36.132997   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:36 GMT
	I0612 15:02:36.133145   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:36.635678   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:36.635678   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:36.635678   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:36.635678   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:36.636239   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:36.640856   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:36.640856   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:36 GMT
	I0612 15:02:36.640856   13752 round_trippers.go:580]     Audit-Id: 3952db6d-bbab-4bbf-9c03-750625fb84bf
	I0612 15:02:36.640856   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:36.640856   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:36.640856   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:36.640856   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:36.640856   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:37.132613   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:37.132655   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:37.132655   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:37.132692   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:37.136199   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:02:37.136261   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:37.136334   13752 round_trippers.go:580]     Audit-Id: ff630076-938d-4400-85c7-004cc7173a13
	I0612 15:02:37.136334   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:37.136370   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:37.136370   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:37.136370   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:37.136370   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:37 GMT
	I0612 15:02:37.136501   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:37.647259   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:37.647259   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:37.647259   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:37.647259   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:37.647864   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:37.647864   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:37.647864   13752 round_trippers.go:580]     Audit-Id: 812419be-64fe-49f6-83cb-4a9a56ae3352
	I0612 15:02:37.647864   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:37.647864   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:37.647864   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:37.647864   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:37.647864   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:37 GMT
	I0612 15:02:37.650905   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:38.141961   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:38.141961   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:38.141961   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:38.141961   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:38.142456   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:38.142456   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:38.142456   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:38.142456   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:38.142456   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:38 GMT
	I0612 15:02:38.142456   13752 round_trippers.go:580]     Audit-Id: 0e354682-3445-4c89-b736-dc53748461b6
	I0612 15:02:38.142456   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:38.142456   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:38.147187   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:38.147187   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:38.630171   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:38.630171   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:38.630171   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:38.630171   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:38.630940   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:38.630940   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:38.630940   13752 round_trippers.go:580]     Audit-Id: 4a11168f-1812-40ed-b5e0-5f14a097ecec
	I0612 15:02:38.630940   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:38.630940   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:38.630940   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:38.630940   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:38.630940   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:38 GMT
	I0612 15:02:38.635455   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:39.135194   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:39.135194   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:39.135194   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:39.135194   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:39.135739   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:39.135739   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:39.135739   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:39.135739   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:39 GMT
	I0612 15:02:39.135739   13752 round_trippers.go:580]     Audit-Id: 6448042b-250c-416e-85bc-e747e2aa29c3
	I0612 15:02:39.135739   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:39.135739   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:39.135739   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:39.140553   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:39.639066   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:39.639066   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:39.639066   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:39.639066   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:39.649390   13752 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 15:02:39.649516   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:39.649516   13752 round_trippers.go:580]     Audit-Id: 4dcf51cd-5311-4e53-a0f3-00c0524677b0
	I0612 15:02:39.649516   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:39.649516   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:39.649516   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:39.649516   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:39.649516   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:39 GMT
	I0612 15:02:39.649675   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:40.133038   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:40.133038   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:40.133038   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:40.133038   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:40.133324   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:40.133324   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:40.133324   13752 round_trippers.go:580]     Audit-Id: fbf68be7-b29a-4d0c-aa4d-21ce19c4f793
	I0612 15:02:40.133324   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:40.133324   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:40.133324   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:40.133324   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:40.133324   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:40 GMT
	I0612 15:02:40.137932   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:40.642693   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:40.642693   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:40.642693   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:40.642693   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:40.643181   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:40.643181   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:40.643181   13752 round_trippers.go:580]     Audit-Id: 68b76927-7da0-4ebf-9574-2624b0275910
	I0612 15:02:40.643181   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:40.643181   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:40.643181   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:40.643181   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:40.643181   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:40 GMT
	I0612 15:02:40.647273   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:40.647615   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:41.137695   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:41.137695   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:41.137695   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:41.137695   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:41.147822   13752 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0612 15:02:41.147822   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:41.147822   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:41 GMT
	I0612 15:02:41.147822   13752 round_trippers.go:580]     Audit-Id: 3a216226-4bf9-46da-a15d-6976309e7b9b
	I0612 15:02:41.147822   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:41.147822   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:41.147822   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:41.147822   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:41.148598   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:41.633907   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:41.633907   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:41.633907   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:41.633907   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:41.634425   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:41.634425   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:41.634425   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:41.634425   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:41.634425   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:41 GMT
	I0612 15:02:41.641707   13752 round_trippers.go:580]     Audit-Id: 0239a8b4-6a11-4367-9646-7da0224b27ac
	I0612 15:02:41.641707   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:41.641707   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:41.641957   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:42.130844   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:42.131026   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:42.131026   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:42.131026   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:42.131306   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:42.131306   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:42.131306   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:42.131306   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:42.131306   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:42.131306   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:42.131306   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:42 GMT
	I0612 15:02:42.131306   13752 round_trippers.go:580]     Audit-Id: 83e98fcb-fd7b-4a20-9518-283c77f823a0
	I0612 15:02:42.135630   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:42.639185   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:42.639481   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:42.639481   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:42.639481   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:42.639726   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:42.643357   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:42.643357   13752 round_trippers.go:580]     Audit-Id: 1e2f2d66-114f-4a95-9daf-170229786432
	I0612 15:02:42.643357   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:42.643357   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:42.643357   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:42.643357   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:42.643443   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:42 GMT
	I0612 15:02:42.643996   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:43.131108   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:43.131177   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:43.131177   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:43.131177   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:43.135474   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:43.135574   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:43.135574   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:43.135574   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:43 GMT
	I0612 15:02:43.135706   13752 round_trippers.go:580]     Audit-Id: 09ab55b2-3ca9-4e46-be41-8685a43593d9
	I0612 15:02:43.135706   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:43.135706   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:43.135706   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:43.135853   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:43.135853   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:43.634264   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:43.634264   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:43.634264   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:43.634264   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:43.640359   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:02:43.640359   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:43.640359   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:43.640359   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:43.640359   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:43 GMT
	I0612 15:02:43.640359   13752 round_trippers.go:580]     Audit-Id: 970fec84-f03e-43dd-8131-863be9b1c3f0
	I0612 15:02:43.640359   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:43.640359   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:43.640830   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:44.129509   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:44.129543   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:44.129592   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:44.129592   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:44.137387   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:02:44.137387   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:44.137387   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:44.137387   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:44.137387   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:44.137387   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:44 GMT
	I0612 15:02:44.137387   13752 round_trippers.go:580]     Audit-Id: 208e9f5b-f5d7-484d-a5a5-de055d63ac5e
	I0612 15:02:44.137387   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:44.143304   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1772","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0612 15:02:44.641420   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:44.641420   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:44.641420   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:44.641420   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:44.641965   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:44.641965   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:44.641965   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:44 GMT
	I0612 15:02:44.641965   13752 round_trippers.go:580]     Audit-Id: 0807ccb1-8c7b-4fad-8d8e-a11488a690f5
	I0612 15:02:44.641965   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:44.645681   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:44.645681   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:44.645681   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:44.645942   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:45.138092   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:45.138092   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:45.138092   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:45.138092   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:45.138660   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:45.142763   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:45.142763   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:45.142763   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:45.142763   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:45.142860   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:45.142860   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:45 GMT
	I0612 15:02:45.142860   13752 round_trippers.go:580]     Audit-Id: 74090bdc-81fe-4d10-beab-310299bdab1c
	I0612 15:02:45.142929   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:45.143515   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:45.645463   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:45.645463   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:45.645463   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:45.645463   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:45.646925   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:02:45.646925   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:45.646925   13752 round_trippers.go:580]     Audit-Id: 2c03576a-b851-430e-9a2d-a3a9be3e6a8f
	I0612 15:02:45.646925   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:45.646925   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:45.648810   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:45.648810   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:45.648810   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:45 GMT
	I0612 15:02:45.649206   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:46.142845   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:46.142845   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:46.142909   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:46.142909   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:46.148125   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:02:46.148549   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:46.148549   13752 round_trippers.go:580]     Audit-Id: 5a03aa75-28ea-491d-946f-5dabe045d8a0
	I0612 15:02:46.148549   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:46.148549   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:46.148606   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:46.148606   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:46.148606   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:46 GMT
	I0612 15:02:46.148825   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:46.632705   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:46.632705   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:46.632705   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:46.632705   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:46.635221   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:02:46.635221   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:46.636373   13752 round_trippers.go:580]     Audit-Id: f2f569ba-307c-4e79-af70-e472246d5a9d
	I0612 15:02:46.636373   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:46.636373   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:46.636373   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:46.636373   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:46.636373   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:46 GMT
	I0612 15:02:46.636373   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:47.143878   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:47.143943   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:47.143943   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:47.143943   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:47.144278   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:47.148129   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:47.148129   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:47 GMT
	I0612 15:02:47.148129   13752 round_trippers.go:580]     Audit-Id: 0492a9d8-b02e-4f1f-9105-9c1179c328b1
	I0612 15:02:47.148129   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:47.148129   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:47.148129   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:47.148129   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:47.148129   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:47.148876   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:47.648494   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:47.648494   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:47.648494   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:47.648494   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:47.649021   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:47.652001   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:47.652001   13752 round_trippers.go:580]     Audit-Id: fcf74f3f-743d-4726-9229-ca7c555f6e86
	I0612 15:02:47.652001   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:47.652001   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:47.652001   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:47.652001   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:47.652001   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:47 GMT
	I0612 15:02:47.652001   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:48.139572   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:48.139572   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:48.139572   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:48.139572   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:48.143679   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:02:48.143679   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:48.143679   13752 round_trippers.go:580]     Audit-Id: a43b60e4-5610-413d-acee-c6af0d4c21a4
	I0612 15:02:48.143679   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:48.143679   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:48.143679   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:48.143679   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:48.143679   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:48 GMT
	I0612 15:02:48.143679   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:48.638819   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:48.638819   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:48.638819   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:48.638819   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:48.642814   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:48.642852   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:48.642852   13752 round_trippers.go:580]     Audit-Id: e42aec87-e0a4-4f26-966d-07a90a72a008
	I0612 15:02:48.642852   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:48.642852   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:48.642852   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:48.642852   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:48.642852   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:48 GMT
	I0612 15:02:48.642852   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:49.134581   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:49.134879   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:49.134879   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:49.134879   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:49.135233   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:49.139052   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:49.139052   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:49.139052   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:49.139052   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:49.139052   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:49 GMT
	I0612 15:02:49.139052   13752 round_trippers.go:580]     Audit-Id: ec8c20ff-2ea1-4b7c-9baf-af3664a76318
	I0612 15:02:49.139052   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:49.139052   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:49.650214   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:49.650302   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:49.650302   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:49.650302   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:49.651082   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:49.654312   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:49.654312   13752 round_trippers.go:580]     Audit-Id: ce842bbc-fc75-4e5d-bd62-ce1df2837521
	I0612 15:02:49.654312   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:49.654312   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:49.654312   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:49.654312   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:49.654312   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:49 GMT
	I0612 15:02:49.654312   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:49.655281   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:50.139543   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:50.139543   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:50.139543   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:50.139543   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:50.140279   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:50.140279   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:50.140279   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:50.140279   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:50.140279   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:50 GMT
	I0612 15:02:50.140279   13752 round_trippers.go:580]     Audit-Id: f5c4537b-9bfa-470c-939a-1d80375bb472
	I0612 15:02:50.140279   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:50.140279   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:50.144268   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:50.632735   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:50.632735   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:50.632835   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:50.632835   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:50.633097   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:50.633097   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:50.633097   13752 round_trippers.go:580]     Audit-Id: b17a24aa-062c-4d56-ab19-217ea2c97d68
	I0612 15:02:50.633097   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:50.637060   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:50.637060   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:50.637060   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:50.637060   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:50 GMT
	I0612 15:02:50.637404   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:51.139845   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:51.140137   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:51.140137   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:51.140137   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:51.140525   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:51.144482   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:51.144482   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:51.144482   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:51.144550   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:51 GMT
	I0612 15:02:51.144550   13752 round_trippers.go:580]     Audit-Id: 66121eeb-914c-49b0-989a-cb0ab3eea56d
	I0612 15:02:51.144550   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:51.144550   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:51.144707   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:51.641552   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:51.641552   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:51.641552   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:51.641552   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:51.647911   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:02:51.647911   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:51.647911   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:51.647911   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:51.647911   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:51.647911   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:51 GMT
	I0612 15:02:51.647911   13752 round_trippers.go:580]     Audit-Id: 4ea5d0e8-aa79-48ab-836c-8f7901a76124
	I0612 15:02:51.647911   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:51.649338   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:52.140387   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:52.140387   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:52.140387   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:52.140387   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:52.140969   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:52.140969   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:52.144528   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:52.144528   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:52.144528   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:52 GMT
	I0612 15:02:52.144528   13752 round_trippers.go:580]     Audit-Id: 6bda08b4-29f0-44e2-bd05-400d886a7037
	I0612 15:02:52.144528   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:52.144528   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:52.144756   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:52.145509   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:52.638166   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:52.638166   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:52.638166   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:52.638166   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:52.642174   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:52.642216   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:52.642216   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:52.642216   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:52.642216   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:52 GMT
	I0612 15:02:52.642216   13752 round_trippers.go:580]     Audit-Id: 507d1e03-9e52-4dd7-b69d-8b4f7405f2ad
	I0612 15:02:52.642216   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:52.642216   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:52.642216   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:53.131499   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:53.131646   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:53.131646   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:53.131646   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:53.132789   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:02:53.132789   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:53.132789   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:53 GMT
	I0612 15:02:53.132789   13752 round_trippers.go:580]     Audit-Id: 34737c82-f4ec-4d79-a4c7-6ea39c4ac9d0
	I0612 15:02:53.132789   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:53.136551   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:53.136551   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:53.136551   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:53.136661   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:53.630323   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:53.630323   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:53.630323   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:53.630323   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:53.631042   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:53.631042   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:53.631042   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:53.631042   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:53 GMT
	I0612 15:02:53.638687   13752 round_trippers.go:580]     Audit-Id: 2e0e1d38-01a3-477b-809b-a0188a92a062
	I0612 15:02:53.638687   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:53.638687   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:53.638687   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:53.638840   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:54.141746   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:54.141746   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:54.141746   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:54.142038   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:54.142315   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:54.142315   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:54.142315   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:54.142315   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:54.142315   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:54.142315   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:54.142315   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:54 GMT
	I0612 15:02:54.142315   13752 round_trippers.go:580]     Audit-Id: de69b983-c705-4010-ad85-301ab4e0aaea
	I0612 15:02:54.148042   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:54.148495   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:54.639066   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:54.639066   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:54.639066   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:54.639066   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:54.639632   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:54.639632   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:54.639632   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:54.639632   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:54.639632   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:54.639632   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:54.642270   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:54 GMT
	I0612 15:02:54.642270   13752 round_trippers.go:580]     Audit-Id: b9d6ca66-7d48-4f48-bcd1-b8ecfd9b7d86
	I0612 15:02:54.642501   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:55.145356   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:55.145356   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:55.145356   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:55.145356   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:55.146606   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:02:55.146606   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:55.146606   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:55 GMT
	I0612 15:02:55.146606   13752 round_trippers.go:580]     Audit-Id: e63b3525-57ba-4188-9b45-53c338b92e78
	I0612 15:02:55.149891   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:55.149891   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:55.149891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:55.149891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:55.150208   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:55.641073   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:55.641148   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:55.641148   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:55.641173   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:55.647972   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:02:55.647972   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:55.647972   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:55.647972   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:55 GMT
	I0612 15:02:55.647972   13752 round_trippers.go:580]     Audit-Id: cad13f2a-c5cb-480b-bae3-8323c4b4714c
	I0612 15:02:55.647972   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:55.647972   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:55.647972   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:55.649384   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:56.139188   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:56.139414   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:56.139414   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:56.139414   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:56.144250   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:02:56.144292   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:56.144376   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:56.144376   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:56 GMT
	I0612 15:02:56.144376   13752 round_trippers.go:580]     Audit-Id: 8a136706-01ca-40f4-ab91-162f3f44cfe1
	I0612 15:02:56.144411   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:56.144411   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:56.144411   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:56.144652   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:56.631947   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:56.631947   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:56.632022   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:56.632022   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:56.632819   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:56.632819   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:56.632819   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:56.632819   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:56.636461   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:56 GMT
	I0612 15:02:56.636461   13752 round_trippers.go:580]     Audit-Id: 91d64590-5e37-4085-8fbb-4d81bbef2ef6
	I0612 15:02:56.636461   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:56.636461   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:56.636759   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:56.637292   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:57.142358   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:57.142595   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:57.142595   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:57.142595   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:57.143466   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:57.143466   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:57.143466   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:57 GMT
	I0612 15:02:57.148279   13752 round_trippers.go:580]     Audit-Id: 20fe3bb4-3608-43a8-817d-c6e2be21ad07
	I0612 15:02:57.148279   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:57.148279   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:57.148279   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:57.148403   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:57.148599   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:57.646718   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:57.646872   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:57.646872   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:57.646959   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:57.650358   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:02:57.650358   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:57.650358   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:57.650358   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:57.650358   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:57.650358   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:57 GMT
	I0612 15:02:57.650358   13752 round_trippers.go:580]     Audit-Id: 2c9b8312-977d-477e-8761-04fe49ea7782
	I0612 15:02:57.650358   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:57.650358   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:58.142949   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:58.142949   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:58.142949   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:58.142949   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:58.143572   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:58.147512   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:58.147512   13752 round_trippers.go:580]     Audit-Id: 686289a3-1b37-4337-b1df-4232076139e7
	I0612 15:02:58.147512   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:58.147646   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:58.147774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:58.147774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:58.147774   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:58 GMT
	I0612 15:02:58.147862   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:58.644345   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:58.644345   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:58.644457   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:58.644457   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:58.644829   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:58.644829   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:58.644829   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:58 GMT
	I0612 15:02:58.644829   13752 round_trippers.go:580]     Audit-Id: 195dd45e-17fd-458b-8a15-08496e7ab7d7
	I0612 15:02:58.644829   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:58.644829   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:58.644829   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:58.648338   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:58.648483   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:58.649096   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:02:59.135237   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:59.135237   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:59.135237   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:59.135237   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:59.135673   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:02:59.139347   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:59.139434   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:59 GMT
	I0612 15:02:59.139434   13752 round_trippers.go:580]     Audit-Id: 3021bd87-cb23-4982-981e-1880cc6e7256
	I0612 15:02:59.139434   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:59.139434   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:59.139434   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:59.139434   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:59.139434   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:02:59.644639   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:02:59.644639   13752 round_trippers.go:469] Request Headers:
	I0612 15:02:59.644639   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:02:59.644639   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:02:59.648010   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:02:59.648244   13752 round_trippers.go:577] Response Headers:
	I0612 15:02:59.648244   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:02:59 GMT
	I0612 15:02:59.648244   13752 round_trippers.go:580]     Audit-Id: 976a5857-7f72-463a-833a-78a5cc6ae3d8
	I0612 15:02:59.648381   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:02:59.648381   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:02:59.648381   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:02:59.648381   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:02:59.648756   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:00.135449   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:00.135449   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:00.135449   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:00.135449   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:00.136003   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:00.140457   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:00.140457   13752 round_trippers.go:580]     Audit-Id: 11f3cb50-dd28-407e-a21e-cb93ae42961f
	I0612 15:03:00.140457   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:00.140457   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:00.140457   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:00.140457   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:00.140457   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:00 GMT
	I0612 15:03:00.140457   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:00.634722   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:00.634722   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:00.634722   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:00.634722   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:00.635365   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:00.635365   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:00.635365   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:00 GMT
	I0612 15:03:00.635365   13752 round_trippers.go:580]     Audit-Id: 0b91b955-f405-42b7-b790-f34b045553ec
	I0612 15:03:00.635365   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:00.635365   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:00.639493   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:00.639493   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:00.639909   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:01.143191   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:01.143417   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:01.143417   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:01.143417   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:01.143697   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:01.147527   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:01.147527   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:01 GMT
	I0612 15:03:01.147527   13752 round_trippers.go:580]     Audit-Id: d5040ae2-0164-4654-b24c-1ff69481062b
	I0612 15:03:01.147527   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:01.147527   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:01.147527   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:01.147611   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:01.147679   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:01.148368   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:01.641872   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:01.642115   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:01.642115   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:01.642115   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:01.642440   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:01.642440   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:01.642440   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:01.642440   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:01 GMT
	I0612 15:03:01.642440   13752 round_trippers.go:580]     Audit-Id: ba3754fc-afe9-49a0-a96c-bbf267bf2a10
	I0612 15:03:01.642440   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:01.642440   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:01.642440   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:01.645720   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:02.129515   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:02.129515   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:02.129614   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:02.129614   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:02.130242   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:02.133528   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:02.133528   13752 round_trippers.go:580]     Audit-Id: 2d2e6495-76f7-4207-96a9-aa13cc893089
	I0612 15:03:02.133528   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:02.133528   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:02.133528   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:02.133528   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:02.133528   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:02 GMT
	I0612 15:03:02.133782   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:02.633527   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:02.633527   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:02.633778   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:02.633778   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:02.637700   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:02.637700   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:02.637700   13752 round_trippers.go:580]     Audit-Id: 45f84608-c6da-4334-ac7e-ddcc400b8087
	I0612 15:03:02.637700   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:02.637700   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:02.637700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:02.637700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:02.637700   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:02 GMT
	I0612 15:03:02.637995   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:03.132689   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:03.132689   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:03.132689   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:03.132689   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:03.133669   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:03.133669   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:03.136989   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:03 GMT
	I0612 15:03:03.136989   13752 round_trippers.go:580]     Audit-Id: 772a8546-5d13-47eb-bf6d-ed3ebd156e02
	I0612 15:03:03.136989   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:03.136989   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:03.136989   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:03.136989   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:03.137424   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:03.647711   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:03.647711   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:03.647711   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:03.647903   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:03.648504   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:03.648504   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:03.651570   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:03.651570   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:03.651570   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:03.651570   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:03.651570   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:03 GMT
	I0612 15:03:03.651570   13752 round_trippers.go:580]     Audit-Id: 4d2eb9e7-e070-451f-bc89-bb0ae6450467
	I0612 15:03:03.651678   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:03.652127   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:04.141516   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:04.141647   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:04.141647   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:04.141647   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:04.148671   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:04.148671   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:04.148756   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:04.148756   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:04.148756   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:04.148756   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:04.148791   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:04 GMT
	I0612 15:03:04.148791   13752 round_trippers.go:580]     Audit-Id: e9e220eb-dbba-4ae7-b7cf-c873aeb24231
	I0612 15:03:04.148923   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:04.630964   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:04.631108   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:04.631108   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:04.631108   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:04.631390   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:04.631390   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:04.631390   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:04.635298   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:04.635298   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:04.635298   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:04 GMT
	I0612 15:03:04.635298   13752 round_trippers.go:580]     Audit-Id: c876ddc5-a24b-429a-92e2-a4ad20a7d83b
	I0612 15:03:04.635298   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:04.635678   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:05.140706   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:05.141249   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:05.141249   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:05.141249   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:05.141891   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:05.141891   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:05.141891   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:05.141891   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:05.141891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:05.141891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:05.141891   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:05 GMT
	I0612 15:03:05.141891   13752 round_trippers.go:580]     Audit-Id: e9b74051-8e77-438e-887e-2c05705a3f63
	I0612 15:03:05.146329   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:05.645551   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:05.645810   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:05.645810   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:05.645810   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:05.654583   13752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 15:03:05.654583   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:05.654583   13752 round_trippers.go:580]     Audit-Id: f7197bd1-0f77-486d-b5cc-dbeced0b88be
	I0612 15:03:05.654583   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:05.654583   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:05.654583   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:05.654583   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:05.654583   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:05 GMT
	I0612 15:03:05.655139   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:05.655384   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:06.135845   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:06.136081   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:06.136081   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:06.136081   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:06.136855   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:06.136855   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:06.140440   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:06.140440   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:06 GMT
	I0612 15:03:06.140440   13752 round_trippers.go:580]     Audit-Id: 1de4c03e-90fa-4dae-904c-fee95d24c0bf
	I0612 15:03:06.140440   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:06.140440   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:06.140440   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:06.140440   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:06.635670   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:06.635754   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:06.635754   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:06.635754   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:06.636503   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:06.636503   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:06.642930   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:06.642930   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:06.642930   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:06 GMT
	I0612 15:03:06.642930   13752 round_trippers.go:580]     Audit-Id: d7f8487c-4817-423b-8eae-ba1900959d38
	I0612 15:03:06.642930   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:06.642930   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:06.643248   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:07.137764   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:07.138048   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:07.138048   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:07.138048   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:07.138386   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:07.138386   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:07.138386   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:07 GMT
	I0612 15:03:07.138386   13752 round_trippers.go:580]     Audit-Id: 6c36857e-9227-4604-86f4-8655dfa27dda
	I0612 15:03:07.138386   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:07.138386   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:07.138386   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:07.138386   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:07.142146   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:07.646735   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:07.646841   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:07.646841   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:07.646841   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:07.647254   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:07.647254   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:07.647254   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:07.647254   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:07.647254   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:07 GMT
	I0612 15:03:07.647254   13752 round_trippers.go:580]     Audit-Id: 94e6a594-4dc6-4dfd-8190-34a2dea4e2e2
	I0612 15:03:07.647254   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:07.647254   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:07.651514   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:08.130727   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:08.130727   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:08.130727   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:08.130727   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:08.131272   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:08.131272   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:08.131272   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:08.131272   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:08.134600   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:08 GMT
	I0612 15:03:08.134600   13752 round_trippers.go:580]     Audit-Id: 12a6382e-fd3f-4872-b20a-513d3ad54caf
	I0612 15:03:08.134600   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:08.134600   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:08.134660   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:08.135545   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:08.633307   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:08.633399   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:08.633399   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:08.633399   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:08.633711   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:08.637571   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:08.637571   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:08.637571   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:08.637571   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:08.637571   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:08.637571   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:08 GMT
	I0612 15:03:08.637571   13752 round_trippers.go:580]     Audit-Id: 12b95389-425c-4953-b8ba-d0bfdb2dc80e
	I0612 15:03:08.637929   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:09.137989   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:09.138245   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:09.138245   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:09.138245   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:09.138616   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:09.138616   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:09.138616   13752 round_trippers.go:580]     Audit-Id: 49773ac3-55d3-4a9e-9386-c93c496422c3
	I0612 15:03:09.138616   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:09.138616   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:09.138616   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:09.138616   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:09.138616   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:09 GMT
	I0612 15:03:09.143658   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:09.641031   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:09.641031   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:09.641217   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:09.641217   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:09.641605   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:09.641605   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:09.641605   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:09.641605   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:09 GMT
	I0612 15:03:09.641605   13752 round_trippers.go:580]     Audit-Id: ae098f0a-dca6-44e1-ae2b-ba2a8ba8b6d8
	I0612 15:03:09.641605   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:09.641605   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:09.641605   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:09.645198   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:10.130121   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:10.130328   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:10.130328   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:10.130328   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:10.137031   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:10.137031   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:10.137031   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:10.137031   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:10.137031   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:10 GMT
	I0612 15:03:10.137031   13752 round_trippers.go:580]     Audit-Id: e5623e7f-8f39-4ea7-9b2f-9c677bb51e0a
	I0612 15:03:10.137031   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:10.137031   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:10.137671   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:10.138356   13752 node_ready.go:53] node "multinode-025000" has status "Ready":"False"
	I0612 15:03:10.635994   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:10.636065   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:10.636065   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:10.636134   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:10.636429   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:10.639993   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:10.639993   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:10.639993   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:10.640071   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:10.640071   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:10 GMT
	I0612 15:03:10.640071   13752 round_trippers.go:580]     Audit-Id: b3bddf79-9ee0-493b-9360-98d8b1173aca
	I0612 15:03:10.640071   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:10.640109   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:11.141820   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:11.141820   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:11.141820   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:11.142167   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:11.142800   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:11.147487   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:11.147487   13752 round_trippers.go:580]     Audit-Id: c97af60e-dc32-48da-90ff-43d0f4196364
	I0612 15:03:11.147487   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:11.147487   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:11.147487   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:11.147487   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:11.147546   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:11 GMT
	I0612 15:03:11.147648   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:11.645558   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:11.645754   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:11.645754   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:11.645754   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:11.651733   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:11.651733   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:11.651733   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:11.651786   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:11.651786   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:11.651786   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:11.651809   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:11 GMT
	I0612 15:03:11.651809   13752 round_trippers.go:580]     Audit-Id: d4fd169c-fd3c-4974-a7b4-a94bcf0f43f8
	I0612 15:03:11.651838   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1889","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0612 15:03:12.139831   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:12.139905   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.139905   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.139905   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.143452   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:12.143452   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.143452   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.143452   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.143452   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.143452   13752 round_trippers.go:580]     Audit-Id: 5608a3eb-4686-445b-b409-8c5557525254
	I0612 15:03:12.143452   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.143452   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.143452   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:12.144281   13752 node_ready.go:49] node "multinode-025000" has status "Ready":"True"
	I0612 15:03:12.144343   13752 node_ready.go:38] duration metric: took 36.0155064s for node "multinode-025000" to be "Ready" ...
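The 36 seconds of polling above is minikube's node_ready wait: roughly every 500ms it issues GET /api/v1/nodes/multinode-025000 and re-checks whether the node's Ready condition has flipped to True, logging "Ready":"False" after each miss. Below is a minimal client-go sketch of that loop, assuming a standard kubeconfig and clientset; waitNodeReady, the 500ms interval, and the 6m timeout are illustrative stand-ins, not minikube's actual node_ready.go code:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node until its Ready condition reports True,
// mirroring the GET /api/v1/nodes/<name> cadence visible in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-025000", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node is Ready")
}

Each iteration corresponds to one GET/200 pair in the log; the loop finally exits at 15:03:12, when the node object at resourceVersion 1935 first reports "Ready":"True".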
	I0612 15:03:12.144343   13752 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:03:12.144469   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:03:12.144469   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.144537   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.144537   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.149807   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:12.149807   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.149807   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.149807   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.149807   13752 round_trippers.go:580]     Audit-Id: 64d33b6c-23e7-45bf-841d-88c1965795e7
	I0612 15:03:12.149807   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.149807   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.149807   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.151702   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1936"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86624 chars]
	I0612 15:03:12.156052   13752 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:12.156052   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:12.156052   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.156052   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.156052   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.156754   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:12.156754   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.156754   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.156754   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.156754   13752 round_trippers.go:580]     Audit-Id: f8be8a83-c4fe-4a11-b0e7-b147af69d3ca
	I0612 15:03:12.156754   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.156754   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.156754   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.159990   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:12.160726   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:12.160726   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.160726   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.160790   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.161565   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:12.161565   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.161565   13752 round_trippers.go:580]     Audit-Id: 06eff638-34b1-4a22-88e3-b285e1cc1b1b
	I0612 15:03:12.161565   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.161565   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.161565   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.161565   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.161565   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.161565   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:12.664422   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:12.664422   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.664422   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.664422   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.664978   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:12.668841   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.668841   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.668841   13752 round_trippers.go:580]     Audit-Id: 11837902-b391-4790-abbe-09aec8047599
	I0612 15:03:12.668841   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.668841   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.668841   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.668841   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.668841   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:12.669683   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:12.669683   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:12.669683   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:12.669683   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:12.673477   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:12.673538   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:12.673538   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:12.673538   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:12.673538   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:12.673615   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:12 GMT
	I0612 15:03:12.673615   13752 round_trippers.go:580]     Audit-Id: 3e1aae8e-ac8f-4dde-902f-5521941b4889
	I0612 15:03:12.673615   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:12.673851   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:13.159778   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:13.159880   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:13.159880   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:13.159880   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:13.160296   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:13.160296   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:13.164797   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:13.164797   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:13.164797   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:13.164797   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:13 GMT
	I0612 15:03:13.164797   13752 round_trippers.go:580]     Audit-Id: 76e9580d-960e-483f-aeb1-8cc53ead643d
	I0612 15:03:13.164797   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:13.165382   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:13.166173   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:13.166222   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:13.166222   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:13.166222   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:13.166837   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:13.166837   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:13.166837   13752 round_trippers.go:580]     Audit-Id: c8d4b69b-f9e5-486f-9f7a-9bff81a0e1a6
	I0612 15:03:13.166837   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:13.166837   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:13.170925   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:13.170925   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:13.170925   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:13 GMT
	I0612 15:03:13.171378   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:13.666808   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:13.666877   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:13.666910   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:13.666910   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:13.667751   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:13.667751   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:13.667751   13752 round_trippers.go:580]     Audit-Id: c24e6de7-d0fa-4172-826b-4ffd3b6b1188
	I0612 15:03:13.667751   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:13.667751   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:13.667751   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:13.667751   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:13.667751   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:13 GMT
	I0612 15:03:13.671558   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:13.672541   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:13.672541   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:13.672541   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:13.672541   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:13.674416   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:13.674416   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:13.676509   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:13.676509   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:13.676509   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:13.676877   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:13 GMT
	I0612 15:03:13.676977   13752 round_trippers.go:580]     Audit-Id: 95ba785b-a993-476e-845f-20f496642aa5
	I0612 15:03:13.676977   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:13.677337   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1935","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0612 15:03:14.171254   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:14.171318   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:14.171318   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:14.171318   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:14.171676   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:14.171676   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:14.171676   13752 round_trippers.go:580]     Audit-Id: 1e0b47c8-e502-472f-bd00-03832c49d99a
	I0612 15:03:14.171676   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:14.171676   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:14.171676   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:14.174814   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:14.174814   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:14 GMT
	I0612 15:03:14.175082   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:14.175984   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:14.176073   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:14.176073   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:14.176073   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:14.187090   13752 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0612 15:03:14.188690   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:14.188690   13752 round_trippers.go:580]     Audit-Id: 65486f55-678d-4969-b816-e5fd9f9ee245
	I0612 15:03:14.188690   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:14.188690   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:14.188690   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:14.188690   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:14.188690   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:14 GMT
	I0612 15:03:14.189172   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:14.189377   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:14.666257   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:14.666257   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:14.666257   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:14.666257   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:14.671209   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:14.671209   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:14.671209   13752 round_trippers.go:580]     Audit-Id: e4f38913-e3cf-46e6-b6d6-7fa966c9f863
	I0612 15:03:14.671209   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:14.671209   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:14.671209   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:14.671209   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:14.671209   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:14 GMT
	I0612 15:03:14.671209   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:14.672518   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:14.672518   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:14.672698   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:14.672698   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:14.672829   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:14.672829   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:14.675774   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:14.675774   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:14.675774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:14.675774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:14.675774   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:14 GMT
	I0612 15:03:14.675774   13752 round_trippers.go:580]     Audit-Id: 89da0cc2-d633-4620-843c-bc41adf0c7f2
	I0612 15:03:14.676153   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:15.162487   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:15.162487   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:15.162487   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:15.162487   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:15.163038   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:15.166486   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:15.166486   13752 round_trippers.go:580]     Audit-Id: b8296239-7a61-4367-a957-9347881e7348
	I0612 15:03:15.166486   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:15.166486   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:15.166486   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:15.166486   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:15.166486   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:15 GMT
	I0612 15:03:15.167057   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:15.167641   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:15.167641   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:15.167641   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:15.167641   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:15.168524   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:15.170924   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:15.170924   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:15.170924   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:15.171008   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:15 GMT
	I0612 15:03:15.171008   13752 round_trippers.go:580]     Audit-Id: 4bcac12e-945c-4901-ae4d-36f310b49853
	I0612 15:03:15.171081   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:15.171081   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:15.171399   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:15.659773   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:15.659856   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:15.659856   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:15.659856   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:15.662234   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:15.662234   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:15.662234   13752 round_trippers.go:580]     Audit-Id: dfe7942a-7d12-465d-9c14-47e97ecc8463
	I0612 15:03:15.662234   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:15.662234   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:15.662234   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:15.662234   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:15.662234   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:15 GMT
	I0612 15:03:15.662234   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:15.664704   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:15.664771   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:15.664771   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:15.664771   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:15.665050   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:15.667362   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:15.667362   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:15.667362   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:15 GMT
	I0612 15:03:15.667362   13752 round_trippers.go:580]     Audit-Id: 0521ac5c-8d08-411f-884f-92d9706f440c
	I0612 15:03:15.667362   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:15.667362   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:15.667438   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:15.667849   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:16.163461   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:16.163758   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:16.163758   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:16.163758   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:16.164108   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:16.164108   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:16.164108   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:16.164108   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:16.167888   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:16.167888   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:16.167888   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:16 GMT
	I0612 15:03:16.167888   13752 round_trippers.go:580]     Audit-Id: c39c8311-e347-47bd-9285-c2e7e1cc29ba
	I0612 15:03:16.168103   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:16.169046   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:16.169112   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:16.169112   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:16.169112   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:16.169392   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:16.172427   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:16.172427   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:16 GMT
	I0612 15:03:16.172427   13752 round_trippers.go:580]     Audit-Id: 2a7b428e-24c6-4724-b190-cde9bdceec6a
	I0612 15:03:16.172427   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:16.172427   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:16.172427   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:16.172427   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:16.173810   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:16.670654   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:16.670654   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:16.670926   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:16.670926   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:16.678184   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:03:16.678184   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:16.678184   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:16.678184   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:16.678184   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:16.678184   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:16 GMT
	I0612 15:03:16.678184   13752 round_trippers.go:580]     Audit-Id: 65c385b4-9a36-4448-bb81-66c3a1819fe8
	I0612 15:03:16.678184   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:16.678818   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:16.679532   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:16.679532   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:16.679532   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:16.679532   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:16.683291   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:16.683349   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:16.683349   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:16.683410   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:16.683410   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:16 GMT
	I0612 15:03:16.683410   13752 round_trippers.go:580]     Audit-Id: 939f7a8d-5c5b-4871-8220-ddddeb67fa1e
	I0612 15:03:16.683467   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:16.683467   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:16.683931   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:16.684300   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:17.163762   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:17.163892   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:17.163892   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:17.163892   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:17.167949   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:17.167949   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:17.167949   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:17.167949   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:17.167949   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:17.167949   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:17.167949   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:17 GMT
	I0612 15:03:17.167949   13752 round_trippers.go:580]     Audit-Id: 88629e13-4237-4c44-bd1f-a55d4962dd32
	I0612 15:03:17.167949   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:17.169057   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:17.169057   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:17.169057   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:17.169132   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:17.171964   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:17.171964   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:17.171964   13752 round_trippers.go:580]     Audit-Id: 1be9e3ce-e218-43a6-9f31-17694a81e20e
	I0612 15:03:17.171964   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:17.171964   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:17.171964   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:17.171964   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:17.171964   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:17 GMT
	I0612 15:03:17.171964   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:17.670219   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:17.670302   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:17.670302   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:17.670459   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:17.670700   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:17.670700   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:17.670700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:17.670700   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:17 GMT
	I0612 15:03:17.670700   13752 round_trippers.go:580]     Audit-Id: 7d0e7771-3c23-484b-b212-7ff0e24def33
	I0612 15:03:17.670700   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:17.670700   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:17.674406   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:17.675249   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:17.677336   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:17.677336   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:17.677422   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:17.677422   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:17.684537   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:03:17.684537   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:17.684537   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:17.684537   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:17.684537   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:17.684537   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:17.684537   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:17 GMT
	I0612 15:03:17.684537   13752 round_trippers.go:580]     Audit-Id: 477195ac-c4fe-41d8-9b86-b449755e984f
	I0612 15:03:17.685585   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:18.165180   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:18.165276   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:18.165276   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:18.165276   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:18.169231   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:18.169231   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:18.169231   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:18.169346   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:18.169346   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:18.169346   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:18.169346   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:18 GMT
	I0612 15:03:18.169346   13752 round_trippers.go:580]     Audit-Id: 311e0ce8-5836-4c61-bcaf-c9ef5d2897ad
	I0612 15:03:18.169582   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:18.170292   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:18.170292   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:18.170292   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:18.170292   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:18.170855   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:18.173907   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:18.173907   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:18.173907   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:18.173907   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:18.173907   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:18.173907   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:18 GMT
	I0612 15:03:18.173907   13752 round_trippers.go:580]     Audit-Id: d5fdca7b-f0a2-4796-8709-f4bb0506b8d8
	I0612 15:03:18.174455   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:18.671584   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:18.671675   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:18.671675   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:18.671675   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:18.671930   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:18.671930   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:18.671930   13752 round_trippers.go:580]     Audit-Id: 158be6d4-5c48-4cd9-90bb-1259dff2d35f
	I0612 15:03:18.671930   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:18.671930   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:18.671930   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:18.671930   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:18.671930   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:18 GMT
	I0612 15:03:18.676154   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:18.677304   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:18.677360   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:18.677360   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:18.677360   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:18.677554   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:18.677554   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:18.677554   13752 round_trippers.go:580]     Audit-Id: 1a86bd1c-1e8e-4e78-a854-fcc0e8788c07
	I0612 15:03:18.677554   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:18.677554   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:18.680339   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:18.680339   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:18.680339   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:18 GMT
	I0612 15:03:18.680696   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:19.158044   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:19.158044   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:19.158044   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:19.158044   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:19.162516   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:19.162516   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:19.162516   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:19.162516   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:19.162516   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:19.162516   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:19 GMT
	I0612 15:03:19.162595   13752 round_trippers.go:580]     Audit-Id: 88356845-5e18-4403-ac28-c741178182a7
	I0612 15:03:19.162595   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:19.162706   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:19.163676   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:19.163676   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:19.163676   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:19.163676   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:19.164026   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:19.164026   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:19.164026   13752 round_trippers.go:580]     Audit-Id: 539fb5bd-84d8-4eb3-9acd-9cc34f4b056c
	I0612 15:03:19.164026   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:19.166992   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:19.166992   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:19.166992   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:19.166992   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:19 GMT
	I0612 15:03:19.167302   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:19.167332   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:19.667212   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:19.667212   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:19.667212   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:19.667212   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:19.667738   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:19.672267   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:19.672267   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:19 GMT
	I0612 15:03:19.672267   13752 round_trippers.go:580]     Audit-Id: 68701686-d4e9-45fa-ac30-7f2986401a96
	I0612 15:03:19.672267   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:19.672267   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:19.672267   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:19.672267   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:19.672504   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:19.673560   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:19.673560   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:19.673560   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:19.673560   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:19.674088   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:19.674088   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:19.677192   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:19.677192   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:19.677192   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:19 GMT
	I0612 15:03:19.677192   13752 round_trippers.go:580]     Audit-Id: 240f3bb9-d77e-4d6a-9696-36726d94d774
	I0612 15:03:19.677192   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:19.677192   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:19.677569   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:20.163409   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:20.163443   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:20.163443   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:20.163584   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:20.167553   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:20.168873   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:20.168932   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:20.168932   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:20.168932   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:20.168932   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:20.168932   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:20 GMT
	I0612 15:03:20.168932   13752 round_trippers.go:580]     Audit-Id: c4610272-61b4-42b0-93c7-b2a384060bf1
	I0612 15:03:20.169132   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:20.169768   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:20.169768   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:20.169768   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:20.169768   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:20.172036   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:20.172036   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:20.172036   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:20 GMT
	I0612 15:03:20.173278   13752 round_trippers.go:580]     Audit-Id: a47444cc-a5df-43b8-8871-4689e735a750
	I0612 15:03:20.173278   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:20.173278   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:20.173278   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:20.173278   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:20.173667   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:20.668594   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:20.668668   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:20.668668   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:20.668668   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:20.669512   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:20.669512   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:20.672966   13752 round_trippers.go:580]     Audit-Id: c35f4e14-79e2-4bcb-aa08-c972ebfa5829
	I0612 15:03:20.672966   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:20.672966   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:20.672966   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:20.672966   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:20.672966   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:20 GMT
	I0612 15:03:20.673190   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:20.673916   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:20.673916   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:20.673988   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:20.673988   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:20.674237   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:20.677211   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:20.677211   13752 round_trippers.go:580]     Audit-Id: 639e7809-8def-448d-9647-4504d3e489c0
	I0612 15:03:20.677211   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:20.677211   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:20.677211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:20.677211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:20.677211   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:20 GMT
	I0612 15:03:20.677504   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:21.163316   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:21.163316   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:21.163316   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:21.163577   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:21.163841   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:21.168377   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:21.168377   13752 round_trippers.go:580]     Audit-Id: 25d6f705-42cf-41a5-8dad-6a7e1444683a
	I0612 15:03:21.168377   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:21.168377   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:21.168453   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:21.168453   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:21.168453   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:21 GMT
	I0612 15:03:21.168678   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:21.169479   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:21.169479   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:21.169479   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:21.169479   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:21.169818   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:21.172190   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:21.172190   13752 round_trippers.go:580]     Audit-Id: cc8ca485-df90-483a-892c-7f62f30ae7ae
	I0612 15:03:21.172286   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:21.172286   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:21.172286   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:21.172286   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:21.172286   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:21 GMT
	I0612 15:03:21.172589   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:21.173340   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:21.656376   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:21.656665   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:21.656665   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:21.656665   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:21.657043   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:21.657043   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:21.657043   13752 round_trippers.go:580]     Audit-Id: 33236351-ac47-40af-af33-b76163628b9c
	I0612 15:03:21.657043   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:21.657043   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:21.657043   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:21.657043   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:21.660859   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:21 GMT
	I0612 15:03:21.660859   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:21.661995   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:21.662106   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:21.662106   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:21.662106   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:21.662283   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:21.665127   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:21.665127   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:21.665127   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:21.665193   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:21 GMT
	I0612 15:03:21.665193   13752 round_trippers.go:580]     Audit-Id: ebb79d47-d698-4304-9769-1dff97ae62b1
	I0612 15:03:21.665193   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:21.665193   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:21.665829   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:22.156792   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:22.156792   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:22.156792   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:22.156792   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:22.162689   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:22.162689   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:22.162689   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:22.162689   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:22.162689   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:22.162689   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:22 GMT
	I0612 15:03:22.162689   13752 round_trippers.go:580]     Audit-Id: 43dd6a54-f263-4c90-b38b-2a8fb59c2e9c
	I0612 15:03:22.162689   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:22.163338   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:22.164160   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:22.164160   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:22.164160   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:22.164160   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:22.166089   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:22.166089   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:22.166089   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:22 GMT
	I0612 15:03:22.166089   13752 round_trippers.go:580]     Audit-Id: 62c9b18d-4347-4801-b260-96182709d048
	I0612 15:03:22.166089   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:22.166089   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:22.166089   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:22.166089   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:22.167723   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:22.671281   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:22.671510   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:22.671585   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:22.671585   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:22.675359   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:22.675416   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:22.675416   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:22.675416   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:22.675416   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:22.675416   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:22.675416   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:22 GMT
	I0612 15:03:22.675416   13752 round_trippers.go:580]     Audit-Id: 748f48c6-cee3-4999-a6d3-438c31138736
	I0612 15:03:22.675416   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:22.676152   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:22.676152   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:22.676152   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:22.676675   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:22.679533   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:22.679611   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:22.679611   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:22 GMT
	I0612 15:03:22.679611   13752 round_trippers.go:580]     Audit-Id: 4e4a5eb6-456e-4e7c-844f-e55fe98143fb
	I0612 15:03:22.679611   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:22.679611   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:22.679611   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:22.679611   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:22.679611   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:23.160365   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:23.160365   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:23.160365   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:23.160365   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:23.165044   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:23.165044   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:23.165044   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:23.165044   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:23.165044   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:23.165044   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:23.165044   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:23 GMT
	I0612 15:03:23.165044   13752 round_trippers.go:580]     Audit-Id: 732a59c7-9424-4eab-8c79-b22399b0e8f6
	I0612 15:03:23.165044   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:23.166337   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:23.166337   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:23.166437   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:23.166437   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:23.167065   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:23.167065   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:23.167065   13752 round_trippers.go:580]     Audit-Id: 3b35709e-90eb-429f-bd7b-dbe2f80ba5d5
	I0612 15:03:23.167065   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:23.167065   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:23.167065   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:23.167065   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:23.167065   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:23 GMT
	I0612 15:03:23.169795   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:23.664928   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:23.665186   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:23.665186   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:23.665186   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:23.668923   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:23.668923   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:23.668923   13752 round_trippers.go:580]     Audit-Id: 463fe866-f47a-40de-8951-fd8a90a654c2
	I0612 15:03:23.668923   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:23.668923   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:23.668923   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:23.668923   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:23.668923   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:23 GMT
	I0612 15:03:23.669206   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:23.670189   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:23.670220   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:23.670258   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:23.670258   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:23.676511   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:23.676511   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:23.676511   13752 round_trippers.go:580]     Audit-Id: f487fc2f-492d-44be-9469-ccb242c07bac
	I0612 15:03:23.676511   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:23.676511   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:23.676511   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:23.676511   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:23.676511   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:23 GMT
	I0612 15:03:23.676511   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:23.677254   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
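
The entries above are one iteration of minikube's readiness poll: roughly every 500ms, pod_ready.go re-fetches the coredns Pod (and the multinode-025000 Node) and logs the Pod's Ready condition until it flips to True or the wait deadline expires. A minimal client-go sketch of that polling pattern follows; this is an illustrative reconstruction, not minikube's actual source, and waitPodReady is a hypothetical helper name.

// Sketch only: assumes a configured client-go clientset; waitPodReady is a
// hypothetical name illustrating the poll loop visible in the log above.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod every 500ms until its PodReady condition
// is True, the timeout elapses, or an API error occurs.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // stop polling on API errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					fmt.Printf("pod %q in %q namespace has status Ready:%q\n", name, ns, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not posted yet; keep polling
		})
}
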
	I0612 15:03:24.161068   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:24.161068   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:24.161068   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:24.161068   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:24.161655   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:24.165566   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:24.165566   13752 round_trippers.go:580]     Audit-Id: 93499f6d-c31d-436b-a96a-cecf7fb494c8
	I0612 15:03:24.165566   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:24.165566   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:24.165566   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:24.165656   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:24.165656   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:24 GMT
	I0612 15:03:24.165974   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:24.166600   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:24.166600   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:24.166600   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:24.166600   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:24.167370   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:24.170282   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:24.170282   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:24.170282   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:24.170282   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:24.170282   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:24.170282   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:24 GMT
	I0612 15:03:24.170282   13752 round_trippers.go:580]     Audit-Id: f6c0cf04-16ea-466e-b6de-6104ea128202
	I0612 15:03:24.170651   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:24.668066   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:24.668162   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:24.668162   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:24.668221   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:24.672157   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:24.672157   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:24.672157   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:24.672157   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:24.672157   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:24.672157   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:24.672157   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:24 GMT
	I0612 15:03:24.672157   13752 round_trippers.go:580]     Audit-Id: b86a623b-9e8c-4f96-925d-6204ba1ff4f7
	I0612 15:03:24.672157   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:24.673297   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:24.673297   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:24.673297   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:24.673369   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:24.673530   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:24.673530   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:24.673530   13752 round_trippers.go:580]     Audit-Id: 87751d8e-bc3e-485e-8b93-124c084c39ad
	I0612 15:03:24.673530   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:24.673530   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:24.673530   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:24.673530   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:24.673530   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:24 GMT
	I0612 15:03:24.677113   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:25.171354   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:25.171491   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:25.171491   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:25.171491   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:25.172211   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:25.172211   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:25.175615   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:25.175615   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:25 GMT
	I0612 15:03:25.175615   13752 round_trippers.go:580]     Audit-Id: 11c9e43c-6ccb-4be8-bc41-4303d1dc378d
	I0612 15:03:25.175615   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:25.175615   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:25.175615   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:25.176058   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:25.177017   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:25.177087   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:25.177087   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:25.177087   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:25.177320   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:25.180616   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:25.180616   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:25 GMT
	I0612 15:03:25.180616   13752 round_trippers.go:580]     Audit-Id: 940139de-b51c-4be3-9dcd-f2c5ce3b2fa9
	I0612 15:03:25.180616   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:25.180616   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:25.180616   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:25.180616   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:25.180912   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:25.666958   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:25.667124   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:25.667124   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:25.667124   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:25.670726   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:25.670726   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:25.670726   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:25.670726   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:25.670726   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:25.670726   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:25 GMT
	I0612 15:03:25.670726   13752 round_trippers.go:580]     Audit-Id: 09ec9c74-030b-49b4-a669-801c14f9202e
	I0612 15:03:25.670726   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:25.671556   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:25.672302   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:25.672302   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:25.672302   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:25.672302   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:25.672549   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:25.672549   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:25.674829   13752 round_trippers.go:580]     Audit-Id: 3800fa11-bd60-4376-a2b4-e0430795f986
	I0612 15:03:25.674829   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:25.674829   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:25.674829   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:25.674829   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:25.674829   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:25 GMT
	I0612 15:03:25.675231   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:26.157801   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:26.157892   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:26.157892   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:26.157892   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:26.158464   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:26.162372   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:26.162372   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:26 GMT
	I0612 15:03:26.162372   13752 round_trippers.go:580]     Audit-Id: fc04c4f1-41f7-4578-bbf7-88e5f1c798e3
	I0612 15:03:26.162372   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:26.162372   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:26.162372   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:26.162372   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:26.162850   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:26.163585   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:26.163660   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:26.163660   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:26.163660   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:26.165620   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:26.165620   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:26.165620   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:26.167458   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:26.167458   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:26.167458   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:26 GMT
	I0612 15:03:26.167458   13752 round_trippers.go:580]     Audit-Id: f7a9c7c3-3887-4efa-8b9d-901a493b6a0c
	I0612 15:03:26.167458   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:26.167653   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:26.168173   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
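The loop traced above is the readiness wait: every ~500 ms the client GETs the CoreDNS pod and the multinode-025000 node, re-checking the pod's Ready condition (still "False" at this point). Below is a minimal sketch of that polling pattern using client-go; it is not minikube's actual pod_ready.go implementation, the helper name waitPodReady is hypothetical, and only the namespace and pod name are taken from the log.

// Hedged sketch of the ~500 ms readiness poll seen in the log above.
// Assumes a recent client-go/apimachinery (wait.PollUntilContextTimeout).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod every 500 ms until its Ready condition is True,
// mirroring the GET cadence visible in the timestamps above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // abort the wait on API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not reported yet; keep polling
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-vgcxw"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}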
	I0612 15:03:26.668756   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:26.669005   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:26.669005   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:26.669005   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:26.669383   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:26.673035   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:26.673035   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:26.673035   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:26.673035   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:26 GMT
	I0612 15:03:26.673035   13752 round_trippers.go:580]     Audit-Id: bcd2a625-e4bb-4ace-b624-ac972672ce5d
	I0612 15:03:26.673035   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:26.673035   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:26.673421   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:26.674370   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:26.674370   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:26.674370   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:26.674370   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:26.674707   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:26.674707   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:26.674707   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:26 GMT
	I0612 15:03:26.674707   13752 round_trippers.go:580]     Audit-Id: c3840da4-f77d-40d0-ab64-4cecc86047d7
	I0612 15:03:26.674707   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:26.674707   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:26.674707   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:26.674707   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:26.677275   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:27.168209   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:27.168344   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:27.168344   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:27.168454   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:27.169191   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:27.173151   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:27.173151   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:27 GMT
	I0612 15:03:27.173151   13752 round_trippers.go:580]     Audit-Id: 49d9ec69-5ee7-48f6-997d-64f4c3aff8d7
	I0612 15:03:27.173151   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:27.173151   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:27.173151   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:27.173151   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:27.173413   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:27.174230   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:27.174345   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:27.174345   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:27.174345   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:27.177130   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:27.177130   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:27.177130   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:27.177130   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:27.178071   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:27.178071   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:27.178071   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:27 GMT
	I0612 15:03:27.178071   13752 round_trippers.go:580]     Audit-Id: b13d1e26-627f-49c0-b39e-2f7f8b3e9e4b
	I0612 15:03:27.178388   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:27.660792   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:27.660792   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:27.660792   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:27.660792   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:27.661489   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:27.664871   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:27.664871   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:27.664871   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:27.664871   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:27.664871   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:27 GMT
	I0612 15:03:27.665144   13752 round_trippers.go:580]     Audit-Id: bc533479-0f16-4d7b-808d-0e351b817555
	I0612 15:03:27.665144   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:27.665281   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:27.666461   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:27.666638   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:27.666638   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:27.666638   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:27.671788   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:27.671788   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:27.671788   13752 round_trippers.go:580]     Audit-Id: 4c621081-2e41-4e0d-95f7-4a473694b82b
	I0612 15:03:27.671788   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:27.671788   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:27.671788   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:27.671788   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:27.671788   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:27 GMT
	I0612 15:03:27.671788   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:28.168931   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:28.169047   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:28.169129   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:28.169129   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:28.169865   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:28.169865   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:28.169865   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:28.169865   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:28.169865   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:28 GMT
	I0612 15:03:28.169865   13752 round_trippers.go:580]     Audit-Id: faa7f381-8508-41db-9697-7f19237c56c5
	I0612 15:03:28.169865   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:28.169865   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:28.174441   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:28.175301   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:28.175371   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:28.175371   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:28.175371   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:28.175606   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:28.175606   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:28.175606   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:28.175606   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:28.178802   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:28 GMT
	I0612 15:03:28.178802   13752 round_trippers.go:580]     Audit-Id: 4c06d460-a07e-4aa8-baf3-fc9841843b78
	I0612 15:03:28.178802   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:28.178802   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:28.178802   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:28.179417   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
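The round_trippers.go lines themselves come from client-go's debug round tripper, which wraps the HTTP transport and, at high -v levels, prints the method and URL, request headers, response status with latency, and response headers in the shape seen above. A rough standard-library-only equivalent might look like the sketch below; loggingTransport is a hypothetical name and the output format only approximates the real trace.

// Hedged sketch: an http.RoundTripper wrapper that logs requests and
// responses in roughly the style of client-go's round_trippers.go output.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	log.Printf("%s %s", req.Method, req.URL) // cf. "round_trippers.go:463] GET ..."
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v) // cf. the "Request Headers:" block
	}
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	for k, v := range resp.Header {
		log.Printf("    %s: %v", k, v) // cf. the "Response Headers:" block
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/") // placeholder URL, not the API server above
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}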
	I0612 15:03:28.671056   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:28.671056   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:28.671327   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:28.671327   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:28.671596   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:28.671596   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:28.675681   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:28.675681   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:28 GMT
	I0612 15:03:28.675681   13752 round_trippers.go:580]     Audit-Id: a2d6bf8c-d54f-40f6-b8bd-61f69cc97cf4
	I0612 15:03:28.675681   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:28.675681   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:28.675681   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:28.675759   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:28.676677   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:28.676677   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:28.676769   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:28.676769   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:28.676977   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:28.676977   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:28.676977   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:28.676977   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:28.676977   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:28.676977   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:28.680132   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:28 GMT
	I0612 15:03:28.680132   13752 round_trippers.go:580]     Audit-Id: d34b468f-28d3-422b-b6fd-56a813b2aa38
	I0612 15:03:28.680228   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:29.167076   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:29.167076   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:29.167076   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:29.167076   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:29.171779   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:29.171839   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:29.171839   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:29 GMT
	I0612 15:03:29.171839   13752 round_trippers.go:580]     Audit-Id: ac5fbbcf-02ad-44c9-9f3b-e393757b25da
	I0612 15:03:29.171839   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:29.171839   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:29.171839   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:29.171839   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:29.171839   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:29.172626   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:29.173167   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:29.173167   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:29.173167   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:29.176560   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:29.176628   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:29.176628   13752 round_trippers.go:580]     Audit-Id: d770ba98-8506-4a7c-aeea-e0dc5c1c146e
	I0612 15:03:29.176628   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:29.176747   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:29.176747   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:29.176747   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:29.176747   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:29 GMT
	I0612 15:03:29.176747   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:29.667736   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:29.668079   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:29.668079   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:29.668079   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:29.672389   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:29.672389   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:29.672389   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:29.672389   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:29.672389   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:29.672389   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:29 GMT
	I0612 15:03:29.672389   13752 round_trippers.go:580]     Audit-Id: e0e16183-9ca2-424a-b002-96d6c958f2e6
	I0612 15:03:29.672389   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:29.672389   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:29.673549   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:29.673549   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:29.673549   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:29.673549   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:29.674393   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:29.676452   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:29.676452   13752 round_trippers.go:580]     Audit-Id: 23e7d650-7230-4392-981e-c3020661e263
	I0612 15:03:29.676452   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:29.676452   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:29.676527   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:29.676527   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:29.676527   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:29 GMT
	I0612 15:03:29.676769   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:30.166822   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:30.167144   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:30.167144   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:30.167144   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:30.167543   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:30.170838   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:30.170838   13752 round_trippers.go:580]     Audit-Id: 694b4929-d08c-4859-914a-d3243f0eccd8
	I0612 15:03:30.170838   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:30.170838   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:30.170838   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:30.170838   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:30.170838   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:30 GMT
	I0612 15:03:30.171098   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:30.171997   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:30.171997   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:30.171997   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:30.172093   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:30.172342   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:30.175285   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:30.175285   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:30.175285   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:30.175285   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:30.175285   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:30 GMT
	I0612 15:03:30.175285   13752 round_trippers.go:580]     Audit-Id: f81f2098-4ee9-4bef-8315-d50f827a543a
	I0612 15:03:30.175285   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:30.175567   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:30.664442   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:30.664442   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:30.664541   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:30.664541   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:30.664767   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:30.668873   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:30.668873   13752 round_trippers.go:580]     Audit-Id: b413e64c-e534-41f0-a35f-a9b0ea692654
	I0612 15:03:30.668873   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:30.668873   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:30.668873   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:30.668873   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:30.668873   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:30 GMT
	I0612 15:03:30.669620   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:30.670397   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:30.670397   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:30.670397   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:30.670397   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:30.676666   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:30.676666   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:30.676666   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:30 GMT
	I0612 15:03:30.676666   13752 round_trippers.go:580]     Audit-Id: de26cfac-4cc3-492f-b156-47f572324349
	I0612 15:03:30.676666   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:30.676666   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:30.676666   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:30.676666   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:30.676666   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:30.677604   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:31.160016   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:31.160016   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:31.160016   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:31.160016   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:31.160554   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:31.164418   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:31.164418   13752 round_trippers.go:580]     Audit-Id: e715e0cf-29b7-4377-9cff-ae02dd5bde9c
	I0612 15:03:31.164418   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:31.164418   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:31.164418   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:31.164418   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:31.164418   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:31 GMT
	I0612 15:03:31.164418   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:31.165562   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:31.165657   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:31.165657   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:31.165657   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:31.166475   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:31.166475   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:31.166475   13752 round_trippers.go:580]     Audit-Id: d7a61c21-4bd0-4823-a58d-b23911294f55
	I0612 15:03:31.166475   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:31.166475   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:31.166475   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:31.166475   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:31.168620   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:31 GMT
	I0612 15:03:31.169052   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:31.661261   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:31.661261   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:31.661261   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:31.661261   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:31.666232   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:31.666232   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:31.666232   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:31.666232   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:31.666232   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:31.666232   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:31.666232   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:31 GMT
	I0612 15:03:31.666232   13752 round_trippers.go:580]     Audit-Id: 110ac12a-7d7d-444f-a8af-051c7e5b2bb5
	I0612 15:03:31.666232   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:31.667307   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:31.667380   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:31.667380   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:31.667380   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:31.667605   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:31.667605   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:31.667605   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:31.667605   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:31.671283   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:31.671283   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:31.671283   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:31 GMT
	I0612 15:03:31.671283   13752 round_trippers.go:580]     Audit-Id: f3593e29-483a-403a-99e6-aa0d08ce3460
	I0612 15:03:31.671578   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:32.162517   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:32.162517   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:32.162517   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:32.162517   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:32.169807   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:03:32.169807   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:32.169807   13752 round_trippers.go:580]     Audit-Id: 5a76526a-f78e-4dca-b93d-372476ca3459
	I0612 15:03:32.169807   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:32.169807   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:32.169807   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:32.169807   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:32.169807   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:32 GMT
	I0612 15:03:32.169807   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:32.170520   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:32.170520   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:32.170520   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:32.170520   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:32.174112   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:32.174112   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:32.174112   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:32.174112   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:32 GMT
	I0612 15:03:32.174112   13752 round_trippers.go:580]     Audit-Id: af3ee3c8-921a-4739-a920-a87e2a810232
	I0612 15:03:32.174112   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:32.174112   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:32.174112   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:32.175259   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:32.667728   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:32.667728   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:32.667728   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:32.667728   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:32.668280   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:32.672371   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:32.672371   13752 round_trippers.go:580]     Audit-Id: c6f0b530-4a18-4e17-ab4d-b154d65a5c76
	I0612 15:03:32.672371   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:32.672371   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:32.672371   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:32.672371   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:32.672371   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:32 GMT
	I0612 15:03:32.672733   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:32.672869   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:32.673415   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:32.673415   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:32.673415   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:32.676109   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:32.676109   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:32.676109   13752 round_trippers.go:580]     Audit-Id: 4bb42155-6b90-494f-8aeb-1d78c67a8b1c
	I0612 15:03:32.676109   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:32.676109   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:32.676109   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:32.676109   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:32.676109   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:32 GMT
	I0612 15:03:32.676109   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:33.162399   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:33.162399   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:33.162642   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:33.162642   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:33.162953   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:33.166722   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:33.166722   13752 round_trippers.go:580]     Audit-Id: ab2525dc-7384-400e-907f-b2310b507413
	I0612 15:03:33.166722   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:33.166722   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:33.166722   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:33.166722   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:33.166722   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:33 GMT
	I0612 15:03:33.166894   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:33.167602   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:33.167705   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:33.167705   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:33.167705   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:33.168431   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:33.168431   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:33.168431   13752 round_trippers.go:580]     Audit-Id: 22a59fb6-9e12-4cac-83b4-38c00b4f1caf
	I0612 15:03:33.168431   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:33.168431   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:33.170997   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:33.170997   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:33.170997   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:33 GMT
	I0612 15:03:33.171164   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:33.171605   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:33.662817   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:33.663061   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:33.663061   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:33.663061   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:33.667430   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:33.667430   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:33.667533   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:33 GMT
	I0612 15:03:33.667533   13752 round_trippers.go:580]     Audit-Id: 17e14e45-799f-4791-bf4e-894761c5907a
	I0612 15:03:33.667533   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:33.667533   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:33.667533   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:33.667533   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:33.667913   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:33.668661   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:33.668661   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:33.668661   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:33.668661   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:33.672196   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:33.672196   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:33.672196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:33.672310   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:33 GMT
	I0612 15:03:33.672310   13752 round_trippers.go:580]     Audit-Id: e15f3616-0b68-48be-95d2-c8a925c3ca63
	I0612 15:03:33.672310   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:33.672310   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:33.672310   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:33.672310   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:34.166108   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:34.166193   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:34.166193   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:34.166193   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:34.166631   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:34.166631   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:34.166631   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:34.170700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:34.170700   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:34.170700   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:34 GMT
	I0612 15:03:34.170700   13752 round_trippers.go:580]     Audit-Id: e0f9593a-25b5-4d23-aca0-04f229d01366
	I0612 15:03:34.170764   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:34.170764   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:34.171558   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:34.171558   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:34.172083   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:34.172083   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:34.173038   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:34.173038   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:34.173038   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:34.173038   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:34.173038   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:34 GMT
	I0612 15:03:34.175557   13752 round_trippers.go:580]     Audit-Id: f7732646-23ec-4876-97eb-3274c097813c
	I0612 15:03:34.175557   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:34.175557   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:34.175816   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:34.671090   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:34.671322   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:34.671388   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:34.671388   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:34.672211   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:34.672211   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:34.672211   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:34 GMT
	I0612 15:03:34.672211   13752 round_trippers.go:580]     Audit-Id: dc56804b-3732-4dcb-a2b4-687357475b3f
	I0612 15:03:34.672211   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:34.672211   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:34.672211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:34.675789   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:34.676077   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:34.676868   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:34.676940   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:34.676940   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:34.676940   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:34.677217   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:34.677217   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:34.677217   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:34.677217   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:34.677217   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:34.677217   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:34 GMT
	I0612 15:03:34.677217   13752 round_trippers.go:580]     Audit-Id: 199db18b-d3bd-4b47-b728-480bf6a2aa33
	I0612 15:03:34.677217   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:34.680346   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:35.163175   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:35.163332   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:35.163380   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:35.163380   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:35.168958   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:35.170213   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:35.170213   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:35.170213   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:35.170213   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:35.170213   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:35.170213   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:35 GMT
	I0612 15:03:35.170213   13752 round_trippers.go:580]     Audit-Id: 5b39cf61-2da0-41a4-b851-b18b39a16cfa
	I0612 15:03:35.170404   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:35.171299   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:35.171299   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:35.171299   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:35.171299   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:35.171549   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:35.171549   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:35.171549   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:35.174082   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:35.174082   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:35 GMT
	I0612 15:03:35.174082   13752 round_trippers.go:580]     Audit-Id: 95fe8234-6341-4f69-9815-d6e98a9d2745
	I0612 15:03:35.174082   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:35.174082   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:35.174082   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:35.174866   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:35.671898   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:35.671973   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:35.672001   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:35.672001   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:35.676163   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:35.676163   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:35.676163   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:35 GMT
	I0612 15:03:35.676163   13752 round_trippers.go:580]     Audit-Id: 22db17fa-155d-453b-8b07-f5a4d24dac30
	I0612 15:03:35.676163   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:35.676163   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:35.676163   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:35.676163   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:35.677047   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:35.677808   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:35.677808   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:35.677808   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:35.677808   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:35.678585   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:35.678585   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:35.678585   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:35.678585   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:35 GMT
	I0612 15:03:35.678585   13752 round_trippers.go:580]     Audit-Id: ef861691-e177-4246-9077-55910de0c84f
	I0612 15:03:35.678585   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:35.678585   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:35.678585   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:35.681284   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:36.168520   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:36.168622   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:36.168656   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:36.168656   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:36.169557   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:36.173070   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:36.173114   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:36.173114   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:36 GMT
	I0612 15:03:36.173114   13752 round_trippers.go:580]     Audit-Id: 2c5d9972-7efd-4648-948a-2efb0385b346
	I0612 15:03:36.173114   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:36.173114   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:36.173114   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:36.173367   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:36.174321   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:36.174321   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:36.174321   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:36.174321   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:36.175573   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:36.175573   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:36.175573   13752 round_trippers.go:580]     Audit-Id: 2d0dc48c-40ec-4624-98d9-01a07cfffc4a
	I0612 15:03:36.175573   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:36.177169   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:36.177169   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:36.177169   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:36.177169   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:36 GMT
	I0612 15:03:36.177614   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:36.672850   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:36.673104   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:36.673104   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:36.673104   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:36.678512   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:03:36.678512   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:36.678512   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:36 GMT
	I0612 15:03:36.678512   13752 round_trippers.go:580]     Audit-Id: bff89353-8d88-42f5-b65f-29c64d596196
	I0612 15:03:36.678512   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:36.678512   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:36.678512   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:36.678512   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:36.679387   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:36.680271   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:36.680315   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:36.680315   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:36.680315   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:36.687672   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:03:36.687672   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:36.687891   13752 round_trippers.go:580]     Audit-Id: 18d95d95-fdf3-4519-b29d-af9c6d701622
	I0612 15:03:36.687891   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:36.687891   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:36.687891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:36.687891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:36.687891   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:36 GMT
	I0612 15:03:36.687891   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:37.166716   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:37.166949   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:37.166949   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:37.166949   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:37.171371   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:37.171371   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:37.171371   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:37.171371   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:37.171371   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:37.171478   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:37 GMT
	I0612 15:03:37.171478   13752 round_trippers.go:580]     Audit-Id: b1f35ec0-7aa8-43c2-b471-de673d559313
	I0612 15:03:37.171478   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:37.171634   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:37.172469   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:37.172469   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:37.172557   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:37.172557   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:37.172999   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:37.175088   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:37.175088   13752 round_trippers.go:580]     Audit-Id: 09c0339d-397e-43c7-a0ca-bb7484a112de
	I0612 15:03:37.175088   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:37.175088   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:37.175088   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:37.175088   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:37.175088   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:37 GMT
	I0612 15:03:37.175603   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:37.176144   13752 pod_ready.go:102] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"False"
	I0612 15:03:37.661218   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:37.661218   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:37.661218   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:37.661218   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:37.662019   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:37.666220   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:37.666220   13752 round_trippers.go:580]     Audit-Id: 12526e9b-a403-4fc0-a0eb-c834dfe65931
	I0612 15:03:37.666220   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:37.666220   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:37.666220   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:37.666220   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:37.666220   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:37 GMT
	I0612 15:03:37.666361   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1784","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0612 15:03:37.667294   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:37.667350   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:37.667350   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:37.667350   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:37.667603   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:37.671004   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:37.671004   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:37.671004   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:37.671004   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:37.671004   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:37.671004   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:37 GMT
	I0612 15:03:37.671004   13752 round_trippers.go:580]     Audit-Id: add93adc-543b-4e99-b0f5-6a8b83dd9038
	I0612 15:03:37.671304   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.168882   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:03:38.168882   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.168882   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.168882   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.175785   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:38.175785   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.175918   13752 round_trippers.go:580]     Audit-Id: 91c58ae4-1dd8-48d7-9b0a-bfaa5a58ab78
	I0612 15:03:38.175918   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.175918   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.175918   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.175918   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.175918   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.176170   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0612 15:03:38.177199   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.177199   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.177304   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.177304   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.181021   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:38.181021   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.181021   13752 round_trippers.go:580]     Audit-Id: a8a4ed45-6f84-46de-8bf7-daa3f43c4e0c
	I0612 15:03:38.181196   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.181196   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.181196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.181196   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.181196   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.181356   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.182165   13752 pod_ready.go:92] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.182251   13752 pod_ready.go:81] duration metric: took 26.0261131s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.182251   13752 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.182399   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025000
	I0612 15:03:38.182399   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.182399   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.182399   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.184688   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.184688   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.184688   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.184688   13752 round_trippers.go:580]     Audit-Id: d099db0c-abaf-4bd9-ad98-9fd0791086dd
	I0612 15:03:38.184688   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.184688   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.184901   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.184901   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.184901   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2","resourceVersion":"1875","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.200.184:2379","kubernetes.io/config.hash":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.mirror":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.seen":"2024-06-12T22:02:25.563300686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0612 15:03:38.185721   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.185721   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.185721   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.185721   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.187326   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:38.187326   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.187326   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.187326   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.188320   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.188320   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.188320   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.188320   13752 round_trippers.go:580]     Audit-Id: 831ae8ba-c4cd-48aa-a7f2-1efe4660d320
	I0612 15:03:38.188394   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.188394   13752 pod_ready.go:92] pod "etcd-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.188394   13752 pod_ready.go:81] duration metric: took 6.143ms for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.188942   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.189097   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025000
	I0612 15:03:38.189119   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.189119   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.189155   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.191467   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:38.192826   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.192826   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.192826   13752 round_trippers.go:580]     Audit-Id: 0da92277-168f-40e1-ac80-ea72ae98a736
	I0612 15:03:38.192901   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.192901   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.192901   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.192901   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.192901   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025000","namespace":"kube-system","uid":"63e55411-d432-4e5a-becc-fae0887fecae","resourceVersion":"1897","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.200.184:8443","kubernetes.io/config.hash":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.mirror":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.seen":"2024-06-12T22:02:25.478872091Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0612 15:03:38.193548   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.193548   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.193548   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.193548   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.199802   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:03:38.199802   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.199887   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.199887   13752 round_trippers.go:580]     Audit-Id: 9e2e23e7-7222-442d-bcf7-98ee76952a75
	I0612 15:03:38.199887   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.199887   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.199887   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.199887   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.199887   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.200459   13752 pod_ready.go:92] pod "kube-apiserver-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.200565   13752 pod_ready.go:81] duration metric: took 11.6229ms for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.200565   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.200644   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025000
	I0612 15:03:38.200644   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.200719   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.200719   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.203896   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:38.203896   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.203896   13752 round_trippers.go:580]     Audit-Id: f8136e95-ab81-4cfc-9502-ee69c96ac001
	I0612 15:03:38.203896   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.203896   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.203896   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.203896   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.203896   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.203896   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025000","namespace":"kube-system","uid":"68c9aa4f-49ee-439c-ad51-7943e65c0085","resourceVersion":"1895","creationTimestamp":"2024-06-12T21:39:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.mirror":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.seen":"2024-06-12T21:39:23.999674614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0612 15:03:38.205008   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.205073   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.205073   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.205073   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.206444   13752 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0612 15:03:38.207706   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.207706   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.207706   13752 round_trippers.go:580]     Audit-Id: 671224cc-e5a0-44e9-842f-a707d363cf63
	I0612 15:03:38.207706   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.207706   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.207706   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.207759   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.208004   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.208004   13752 pod_ready.go:92] pod "kube-controller-manager-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.208004   13752 pod_ready.go:81] duration metric: took 7.4382ms for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.208004   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.208591   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:03:38.208636   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.208636   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.208636   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.209284   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.209284   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.211393   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.211393   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.211393   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.211437   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.211437   13752 round_trippers.go:580]     Audit-Id: cb34be29-04ce-4a5b-b7d8-f47e54f40eb9
	I0612 15:03:38.211437   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.211588   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47lr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"10b24fa7-8eea-4fbb-ab18-404e853aa7ab","resourceVersion":"1793","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0612 15:03:38.211793   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:38.212407   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.212477   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.212477   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.215193   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:38.215193   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.215193   13752 round_trippers.go:580]     Audit-Id: 1ac66015-7962-4bde-832e-bd0d2a552f90
	I0612 15:03:38.215193   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.215193   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.215193   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.215193   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.215193   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.215193   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:38.216034   13752 pod_ready.go:92] pod "kube-proxy-47lr8" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:38.216034   13752 pod_ready.go:81] duration metric: took 8.0304ms for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.216095   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.369754   13752 request.go:629] Waited for 153.3034ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:03:38.369962   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:03:38.369962   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.369962   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.370080   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.373867   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:38.373867   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.373867   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.373867   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.373867   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.373867   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.373867   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.373867   13752 round_trippers.go:580]     Audit-Id: 20d56aed-e8ac-4ea1-81c6-7eaa4818e6d1
	I0612 15:03:38.373867   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7jwdg","generateName":"kube-proxy-","namespace":"kube-system","uid":"643030f7-b876-4243-bacc-04205e88cc9e","resourceVersion":"1748","creationTimestamp":"2024-06-12T21:47:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:47:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0612 15:03:38.571854   13752 request.go:629] Waited for 196.5255ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:03:38.571933   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:03:38.571933   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.572061   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.572139   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.575522   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.575522   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.575522   13752 round_trippers.go:580]     Audit-Id: e52c8ecb-c0d5-4696-878c-dbeef778a857
	I0612 15:03:38.575522   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.575522   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.575522   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.575522   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.575522   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.576324   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m03","uid":"9d457bc2-c46f-4b5d-8023-5c06ef6198c7","resourceVersion":"1913","creationTimestamp":"2024-06-12T21:57:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_57_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:57:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0612 15:03:38.576915   13752 pod_ready.go:97] node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
	I0612 15:03:38.576915   13752 pod_ready.go:81] duration metric: took 360.8183ms for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	E0612 15:03:38.576915   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
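
The skip above is the waiter's node gate: a pod only counts as waitable if its hosting node reports the NodeReady condition as "True", and multinode-025000-m03 reports "Ready":"Unknown" (the node controller sets Unknown once the kubelet stops posting status). Below is a minimal sketch of that check using client-go; the package and helper names are illustrative, not minikube's actual pod_ready.go code.

	package nodecheck

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady reports whether the named node's NodeReady condition
	// is "True". Both "False" and "Unknown" (kubelet silent) count as
	// not ready, which is why the waiter above skips kube-proxy-7jwdg
	// on multinode-025000-m03.
	func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, fmt.Errorf("node %s has no Ready condition", name)
	}

Treating Unknown the same as False is deliberate: an unreachable kubelet cannot vouch for any pod it hosts, so the pod is skipped rather than waited on.
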
	I0612 15:03:38.576915   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:38.777788   13752 request.go:629] Waited for 200.6726ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:03:38.777994   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:03:38.777994   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.777994   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.777994   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.778288   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.781905   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.781905   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.781905   13752 round_trippers.go:580]     Audit-Id: f4550265-74f1-439c-862d-82804d0fd473
	I0612 15:03:38.781905   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.782000   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.782000   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.782000   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.782145   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdcdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623833c-ce55-46b1-a840-99b3143adac1","resourceVersion":"1958","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0612 15:03:38.980317   13752 request.go:629] Waited for 196.4946ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:03:38.980386   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:03:38.980386   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:38.980386   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:38.980386   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:38.984899   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:38.984966   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:38.984966   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:38.984966   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:38.984966   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:38.984966   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:38.984966   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:38 GMT
	I0612 15:03:38.984966   13752 round_trippers.go:580]     Audit-Id: 2b126299-3eea-4452-adcd-9bf93ba6f4a3
	I0612 15:03:38.984966   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"795a4638-bf70-440d-a6a1-2f194ade7384","resourceVersion":"1963","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_42_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0612 15:03:38.985740   13752 pod_ready.go:97] node "multinode-025000-m02" hosting pod "kube-proxy-tdcdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m02" has status "Ready":"Unknown"
	I0612 15:03:38.985740   13752 pod_ready.go:81] duration metric: took 408.8234ms for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	E0612 15:03:38.985740   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000-m02" hosting pod "kube-proxy-tdcdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m02" has status "Ready":"Unknown"
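
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go itself (request.go), not by the API server: the client's token-bucket rate limiter, which defaults to QPS=5 and Burst=10 on rest.Config, spaces out bursts of GETs like the back-to-back pod and node lookups above. A hedged sketch of building a clientset with a larger budget follows; the values are examples, not minikube's settings.

	package clientcfg

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClientset builds a clientset whose client-side rate limiter is
	// sized for chatty polling loops. With the defaults (QPS=5, Burst=10),
	// client-go logs "Waited for ... due to client-side throttling" as
	// soon as a request has to sit in the token bucket, exactly as seen
	// in the trace above.
	func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // sustained requests per second before throttling
		cfg.Burst = 100 // instantaneous burst allowance
		return kubernetes.NewForConfig(cfg)
	}

Raising QPS/Burst trades the client-side delay for more load on the apiserver; the waits logged above are a few hundred milliseconds and harmless here.
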
	I0612 15:03:38.985740   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:39.174622   13752 request.go:629] Waited for 188.6425ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:03:39.174717   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:03:39.174717   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:39.174717   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:39.174717   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:39.175262   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:39.175262   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:39.175262   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:39.178841   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:39.178899   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:39 GMT
	I0612 15:03:39.178899   13752 round_trippers.go:580]     Audit-Id: f93b5779-6b18-4a07-ab08-c9bdf4045d6a
	I0612 15:03:39.178899   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:39.178899   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:39.178899   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025000","namespace":"kube-system","uid":"83b272cb-1286-47d8-bcb1-a66056dff2a5","resourceVersion":"1865","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.mirror":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.seen":"2024-06-12T21:39:31.214466565Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0612 15:03:39.378032   13752 request.go:629] Waited for 198.2421ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:39.378433   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:03:39.378433   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:39.378433   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:39.378433   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:39.378810   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:39.378810   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:39.378810   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:39.382976   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:39.382976   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:39.382976   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:39.382976   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:39 GMT
	I0612 15:03:39.382976   13752 round_trippers.go:580]     Audit-Id: b2978915-00e9-4054-8ce5-53073014865e
	I0612 15:03:39.383106   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:03:39.383874   13752 pod_ready.go:92] pod "kube-scheduler-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:03:39.383874   13752 pod_ready.go:81] duration metric: took 398.1329ms for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:03:39.383943   13752 pod_ready.go:38] duration metric: took 27.2395096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
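
The 27s total breaks down into per-pod polls like the coredns sequence earlier in the trace: one GET of the pod and one GET of its node roughly every 500ms, until the pod's PodReady condition flips from "False" to "True". Below is a minimal reconstruction of that loop using client-go's wait helpers; minikube's actual pod_ready.go differs in detail.

	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls every 500ms, up to timeout, until the pod's
	// PodReady condition reports "True", the transition logged above as
	// has status "Ready":"False" and later "Ready":"True".
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient error: keep polling
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
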
	I0612 15:03:39.383943   13752 api_server.go:52] waiting for apiserver process to appear ...
	I0612 15:03:39.393117   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0612 15:03:39.423915   13752 command_runner.go:130] > bbe2d2e51b5f
	I0612 15:03:39.428293   13752 logs.go:276] 1 containers: [bbe2d2e51b5f]
	I0612 15:03:39.437876   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0612 15:03:39.462543   13752 command_runner.go:130] > 6b61f5f6483d
	I0612 15:03:39.463172   13752 logs.go:276] 1 containers: [6b61f5f6483d]
	I0612 15:03:39.473204   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0612 15:03:39.497701   13752 command_runner.go:130] > 26e5daf354e3
	I0612 15:03:39.498780   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:03:39.498838   13752 logs.go:276] 2 containers: [26e5daf354e3 e83cf4eef49e]
	I0612 15:03:39.509299   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0612 15:03:39.533720   13752 command_runner.go:130] > 755750ecd1e3
	I0612 15:03:39.533720   13752 command_runner.go:130] > 6b021c195669
	I0612 15:03:39.535953   13752 logs.go:276] 2 containers: [755750ecd1e3 6b021c195669]
	I0612 15:03:39.546650   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0612 15:03:39.573376   13752 command_runner.go:130] > 227a905829b0
	I0612 15:03:39.573376   13752 command_runner.go:130] > c4842faba751
	I0612 15:03:39.573376   13752 logs.go:276] 2 containers: [227a905829b0 c4842faba751]
	I0612 15:03:39.581549   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0612 15:03:39.607553   13752 command_runner.go:130] > 7acc8ff0a931
	I0612 15:03:39.607553   13752 command_runner.go:130] > 685d167da53c
	I0612 15:03:39.607671   13752 logs.go:276] 2 containers: [7acc8ff0a931 685d167da53c]
	I0612 15:03:39.617593   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0612 15:03:39.646054   13752 command_runner.go:130] > cccfd1e9fef5
	I0612 15:03:39.646109   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:03:39.647036   13752 logs.go:276] 2 containers: [cccfd1e9fef5 4d60d82f6bc5]
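
Having confirmed readiness, the tooling switches to log gathering: for each component it runs docker ps -a --filter name=k8s_<component> --format {{.ID}} over SSH to collect container IDs (including exited ones, hence -a), then tails each with docker logs --tail 400 <id>, as seen immediately below. The same probe run locally via os/exec, rather than minikube's ssh_runner, as an illustrative sketch:

	package dockerps

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors the probe in the trace: list all containers,
	// running or exited, whose name matches k8s_<component>, printing
	// only their IDs, one per line.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, fmt.Errorf("docker ps: %w", err)
		}
		return strings.Fields(string(out)), nil
	}
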
	I0612 15:03:39.647083   13752 logs.go:123] Gathering logs for kube-controller-manager [7acc8ff0a931] ...
	I0612 15:03:39.647138   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7acc8ff0a931"
	I0612 15:03:39.670996   13752 command_runner.go:130] ! I0612 22:02:28.579013       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:39.670996   13752 command_runner.go:130] ! I0612 22:02:28.927149       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:39.670996   13752 command_runner.go:130] ! I0612 22:02:28.927184       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:39.674110   13752 command_runner.go:130] ! I0612 22:02:28.930688       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:39.674110   13752 command_runner.go:130] ! I0612 22:02:28.932993       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:39.674110   13752 command_runner.go:130] ! I0612 22:02:28.933167       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:39.674273   13752 command_runner.go:130] ! I0612 22:02:28.933539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:39.674587   13752 command_runner.go:130] ! I0612 22:02:32.987820       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:32.988653       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:32.994458       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:32.995780       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:32.996873       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.005703       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.005720       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.006099       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.006120       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.011328       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.013199       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.013216       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:39.675412   13752 command_runner.go:130] ! W0612 22:02:33.045760       1 shared_informer.go:597] resyncPeriod 19h21m1.650821539s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:39.675412   13752 command_runner.go:130] ! I0612 22:02:33.046400       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:39.675942   13752 command_runner.go:130] ! I0612 22:02:33.046742       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047003       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047066       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047091       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:39.675982   13752 command_runner.go:130] ! I0612 22:02:33.047175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:39.676089   13752 command_runner.go:130] ! I0612 22:02:33.047875       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:39.676089   13752 command_runner.go:130] ! I0612 22:02:33.048961       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:39.676150   13752 command_runner.go:130] ! I0612 22:02:33.049070       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049108       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049203       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049218       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049235       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049307       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! W0612 22:02:33.049318       1 shared_informer.go:597] resyncPeriod 16h27m54.164006095s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049536       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049616       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049652       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049852       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.049880       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.052188       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.075270       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.088124       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.088224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.088312       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:39.676177   13752 command_runner.go:130] ! I0612 22:02:33.092469       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:39.676764   13752 command_runner.go:130] ! I0612 22:02:33.093016       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:39.676764   13752 command_runner.go:130] ! I0612 22:02:33.093183       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099173       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099302       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099269       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.099467       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.102279       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.103692       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.103797       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.109335       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.109737       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.109801       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.109811       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.113018       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.114442       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.114573       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.118932       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.118955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.118979       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.119791       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.121411       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.119985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.122332       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:39.676859   13752 command_runner.go:130] ! I0612 22:02:33.122409       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:39.677454   13752 command_runner.go:130] ! I0612 22:02:33.122432       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.677454   13752 command_runner.go:130] ! I0612 22:02:33.122572       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.122710       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.122722       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.122748       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132412       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132517       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132620       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132660       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.132669       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.139478       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.139854       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.140261       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.169621       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:39.677674   13752 command_runner.go:130] ! I0612 22:02:33.169819       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:39.678228   13752 command_runner.go:130] ! I0612 22:02:33.169849       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:39.678228   13752 command_runner.go:130] ! I0612 22:02:33.170074       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:39.678287   13752 command_runner.go:130] ! I0612 22:02:33.173816       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:39.678287   13752 command_runner.go:130] ! I0612 22:02:33.174120       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:39.678287   13752 command_runner.go:130] ! I0612 22:02:33.174130       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:39.678287   13752 command_runner.go:130] ! I0612 22:02:33.184678       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:39.678357   13752 command_runner.go:130] ! I0612 22:02:33.186030       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:39.678397   13752 command_runner.go:130] ! I0612 22:02:33.192152       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:39.678412   13752 command_runner.go:130] ! I0612 22:02:33.192257       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:39.678436   13752 command_runner.go:130] ! I0612 22:02:33.192268       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:39.678475   13752 command_runner.go:130] ! I0612 22:02:33.194361       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.194659       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.194671       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.200378       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.200552       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.200579       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.203400       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.203797       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.203967       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.207566       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.207732       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.207743       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.207766       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:39.678890   13752 command_runner.go:130] ! I0612 22:02:33.214389       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.214572       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.214655       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.220603       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.221181       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.222958       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:39.679662   13752 command_runner.go:130] ! E0612 22:02:33.228603       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.228994       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.253059       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.253281       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.253292       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.264081       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.266480       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.266606       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.266742       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.380173       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.380458       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.380796       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.398346       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.401718       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.401737       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.495874       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:39.679662   13752 command_runner.go:130] ! I0612 22:02:33.496386       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:39.680256   13752 command_runner.go:130] ! I0612 22:02:33.498064       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:39.680511   13752 command_runner.go:130] ! I0612 22:02:33.698817       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:39.680511   13752 command_runner.go:130] ! I0612 22:02:33.699215       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:39.680511   13752 command_runner.go:130] ! I0612 22:02:33.699646       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:39.680511   13752 command_runner.go:130] ! I0612 22:02:33.744449       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:39.681086   13752 command_runner.go:130] ! I0612 22:02:33.744531       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:39.681143   13752 command_runner.go:130] ! I0612 22:02:33.744546       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:39.681143   13752 command_runner.go:130] ! E0612 22:02:33.807267       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:39.681143   13752 command_runner.go:130] ! I0612 22:02:33.807295       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:39.681143   13752 command_runner.go:130] ! I0612 22:02:33.856639       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:39.684604   13752 command_runner.go:130] ! I0612 22:02:33.857088       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:39.684604   13752 command_runner.go:130] ! I0612 22:02:33.857273       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:39.684604   13752 command_runner.go:130] ! I0612 22:02:33.894016       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:39.685165   13752 command_runner.go:130] ! I0612 22:02:33.896048       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:39.685165   13752 command_runner.go:130] ! I0612 22:02:33.896083       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:39.685165   13752 command_runner.go:130] ! I0612 22:02:33.950707       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:39.685214   13752 command_runner.go:130] ! I0612 22:02:33.950731       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:39.685271   13752 command_runner.go:130] ! I0612 22:02:33.950771       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:33.950821       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:33.950870       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:33.995005       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:33.995247       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.062766       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.063067       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.063362       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.063411       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.068203       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.068603       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.068777       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.071309       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.071638       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.071795       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.080804       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.097810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.100018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.100030       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102193       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102337       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102796       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.102986       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.113771       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.115010       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.115463       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.119062       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:39.685341   13752 command_runner.go:130] ! I0612 22:02:44.121259       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:39.685888   13752 command_runner.go:130] ! I0612 22:02:44.124526       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:39.685888   13752 command_runner.go:130] ! I0612 22:02:44.124650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:39.685888   13752 command_runner.go:130] ! I0612 22:02:44.124971       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:39.685888   13752 command_runner.go:130] ! I0612 22:02:44.126246       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.133682       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.134026       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.141044       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.145563       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:39.685949   13752 command_runner.go:130] ! I0612 22:02:44.158513       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.162319       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.162613       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.162653       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.163186       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164074       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164451       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164672       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164769       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.164780       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.167842       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.174384       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.182521       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.186460       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.194992       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:39.686027   13752 command_runner.go:130] ! I0612 22:02:44.196327       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:39.686660   13752 command_runner.go:130] ! I0612 22:02:44.196530       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.196665       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.200768       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.200988       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.201846       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.207493       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.228051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.792655ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.231633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.306µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.244808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.644732ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.246402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.002µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.297636       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.304265       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.304486       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.311023       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.350865       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.351039       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.353535       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.369296       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.372273       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.381442       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.821842       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.870923       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:02:44.871005       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:11.878868       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:24.254264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.921834ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:24.256639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.601µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.832133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.001µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.905221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.518825ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.905853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.201µs"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.917312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.821108ms"
	I0612 15:03:39.686699   13752 command_runner.go:130] ! I0612 22:03:37.917472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
	I0612 15:03:39.705861   13752 logs.go:123] Gathering logs for kube-controller-manager [685d167da53c] ...
	I0612 15:03:39.707413   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685d167da53c"
	I0612 15:03:39.734182   13752 command_runner.go:130] ! I0612 21:39:26.275086       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:39.742015   13752 command_runner.go:130] ! I0612 21:39:26.758419       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.759036       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.761311       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.761663       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.762454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:26.762652       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.260969       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.261096       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:39.742106   13752 command_runner.go:130] ! E0612 21:39:31.316508       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.316587       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.342032       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.342287       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.342304       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.362243       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.399024       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.399081       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.399264       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.443376       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.443603       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.443617       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.480477       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.480993       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.481007       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.523943       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.524182       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.524535       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.524741       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.553194       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.554412       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.556852       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.560273       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.560448       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:39.742106   13752 command_runner.go:130] ! I0612 21:39:31.561614       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:39.742629   13752 command_runner.go:130] ! I0612 21:39:31.561933       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593438       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593459       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593534       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593588       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593650       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593701       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593721       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593739       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.593950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594051       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594262       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594306       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594500       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594602       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594857       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.594957       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:39.742721   13752 command_runner.go:130] ! I0612 21:39:31.595276       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:39.743244   13752 command_runner.go:130] ! I0612 21:39:31.595463       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.605247       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.605722       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.607199       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.668704       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.669329       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.669521       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.820968       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.821104       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.821117       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.973500       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.973543       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.975344       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:31.975377       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.163715       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.163860       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.320380       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.320516       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.320529       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.468817       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.468893       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.636144       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.636921       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.637331       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.775300       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.776007       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.778803       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.920254       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.920359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:32.920902       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:39.743358   13752 command_runner.go:130] ! I0612 21:39:33.069533       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:39.743880   13752 command_runner.go:130] ! I0612 21:39:33.069689       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:39.743880   13752 command_runner.go:130] ! I0612 21:39:33.069704       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:39.743880   13752 command_runner.go:130] ! I0612 21:39:33.069713       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:39.743880   13752 command_runner.go:130] ! I0612 21:39:33.115693       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:39.744052   13752 command_runner.go:130] ! I0612 21:39:33.115796       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.115809       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.116021       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.116257       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.116416       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.169481       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.169523       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.169561       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.170619       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.170693       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.170745       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.171426       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.171458       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.171479       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.172032       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.172160       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.172352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:33.172295       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:43.229790       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:43.230104       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:43.230715       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:39.744167   13752 command_runner.go:130] ! I0612 21:39:43.230868       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:39.744167   13752 command_runner.go:130] ! E0612 21:39:43.246433       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:39.744691   13752 command_runner.go:130] ! I0612 21:39:43.246740       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:39.744818   13752 command_runner.go:130] ! I0612 21:39:43.246878       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.247178       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.259694       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.260105       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.260326       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.287038       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.287747       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.289545       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.296881       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.297485       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.297679       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.315673       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.316362       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.316724       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.331329       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.331610       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.331966       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.358081       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.358485       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.358595       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.358609       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.373221       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.373371       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.373388       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.386049       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.386265       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.387457       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.473855       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.474115       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.474421       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.622457       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.622831       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.622950       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:39.744986   13752 command_runner.go:130] ! I0612 21:39:43.776632       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:39.745696   13752 command_runner.go:130] ! I0612 21:39:43.777149       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:39.745742   13752 command_runner.go:130] ! I0612 21:39:43.777203       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:43.923199       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:43.923416       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:43.923557       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.219008       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.219041       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.219093       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.219104       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.375322       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.375879       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.375896       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.419335       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.419357       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.419672       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.435364       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.441191       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.456985       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.457052       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.460648       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.463138       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.469825       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.469846       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.469856       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.471608       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.471748       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.472789       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.474041       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.475483       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.475505       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.476080       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.479252       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.481788       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.488300       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.491059       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.499063       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:39.745782   13752 command_runner.go:130] ! I0612 21:39:44.500304       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.507471       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.525355       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.525889       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.526177       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.526390       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.526550       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.526951       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.527038       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.528601       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:39.746367   13752 command_runner.go:130] ! I0612 21:39:44.528834       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.531261       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.531462       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.531679       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.531942       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:39.746536   13752 command_runner.go:130] ! I0612 21:39:44.532097       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.532523       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.537873       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.543447       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.564610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:39.746622   13752 command_runner.go:130] ! I0612 21:39:44.568950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000" podCIDRs=["10.244.0.0/24"]
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.621264       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.644803       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.677466       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.696400       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.723303       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:39.746709   13752 command_runner.go:130] ! I0612 21:39:44.735837       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:44.758870       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:45.157877       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:45.226557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:45.226973       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:39.746789   13752 command_runner.go:130] ! I0612 21:39:45.795416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="243.746414ms"
	I0612 15:03:39.746887   13752 command_runner.go:130] ! I0612 21:39:45.868449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.90937ms"
	I0612 15:03:39.746887   13752 command_runner.go:130] ! I0612 21:39:45.868845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.402µs"
	I0612 15:03:39.746887   13752 command_runner.go:130] ! I0612 21:39:45.869382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.903µs"
	I0612 15:03:39.746963   13752 command_runner.go:130] ! I0612 21:39:45.905402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="386.807µs"
	I0612 15:03:39.746963   13752 command_runner.go:130] ! I0612 21:39:46.349409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.452815ms"
	I0612 15:03:39.746963   13752 command_runner.go:130] ! I0612 21:39:46.386321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.301621ms"
	I0612 15:03:39.746963   13752 command_runner.go:130] ! I0612 21:39:46.386974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="616.309µs"
	I0612 15:03:39.747039   13752 command_runner.go:130] ! I0612 21:39:56.441072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="366.601µs"
	I0612 15:03:39.747039   13752 command_runner.go:130] ! I0612 21:39:56.465727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.4µs"
	I0612 15:03:39.747039   13752 command_runner.go:130] ! I0612 21:39:57.870560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.5µs"
	I0612 15:03:39.747039   13752 command_runner.go:130] ! I0612 21:39:58.874445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448319ms"
	I0612 15:03:39.747124   13752 command_runner.go:130] ! I0612 21:39:58.875168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="103.901µs"
	I0612 15:03:39.747124   13752 command_runner.go:130] ! I0612 21:39:59.529553       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:39.747204   13752 command_runner.go:130] ! I0612 21:42:39.169243       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:39.747204   13752 command_runner.go:130] ! I0612 21:42:39.188142       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 15:03:39.747204   13752 command_runner.go:130] ! I0612 21:42:39.563565       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:39.747278   13752 command_runner.go:130] ! I0612 21:42:58.063730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747278   13752 command_runner.go:130] ! I0612 21:43:24.138579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.052538ms"
	I0612 15:03:39.747278   13752 command_runner.go:130] ! I0612 21:43:24.156190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.434267ms"
	I0612 15:03:39.747278   13752 command_runner.go:130] ! I0612 21:43:24.156677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.099µs"
	I0612 15:03:39.747364   13752 command_runner.go:130] ! I0612 21:43:24.183391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.299µs"
	I0612 15:03:39.747364   13752 command_runner.go:130] ! I0612 21:43:26.908415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.051448ms"
	I0612 15:03:39.747364   13752 command_runner.go:130] ! I0612 21:43:26.908853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34µs"
	I0612 15:03:39.747364   13752 command_runner.go:130] ! I0612 21:43:27.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.474956ms"
	I0612 15:03:39.747440   13752 command_runner.go:130] ! I0612 21:43:27.304566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488944ms"
	I0612 15:03:39.747440   13752 command_runner.go:130] ! I0612 21:47:16.485552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747440   13752 command_runner.go:130] ! I0612 21:47:16.486568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:39.747521   13752 command_runner.go:130] ! I0612 21:47:16.503987       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.2.0/24"]
	I0612 15:03:39.747521   13752 command_runner.go:130] ! I0612 21:47:19.629018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:39.747521   13752 command_runner.go:130] ! I0612 21:47:35.032365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747635   13752 command_runner.go:130] ! I0612 21:55:19.767980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747676   13752 command_runner.go:130] ! I0612 21:57:52.374240       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747676   13752 command_runner.go:130] ! I0612 21:57:58.774442       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:39.747768   13752 command_runner.go:130] ! I0612 21:57:58.774588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747768   13752 command_runner.go:130] ! I0612 21:57:58.809041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.3.0/24"]
	I0612 15:03:39.747807   13752 command_runner.go:130] ! I0612 21:58:06.126407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.747807   13752 command_runner.go:130] ! I0612 21:59:45.222238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:39.759960   13752 logs.go:123] Gathering logs for Docker ...
	I0612 15:03:39.759960   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0612 15:03:39.793299   13752 command_runner.go:130] > Jun 12 22:00:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:39.793299   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:39.793410   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:39.793536   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.793536   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:39.793536   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.793612   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:39.793612   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:39.793612   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:39.793612   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:39.793737   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:39.793737   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:39.793737   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:39.793737   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:39.793842   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:39.793948   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:39.794082   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.903212301Z" level=info msg="Starting up"
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.904075211Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.905013523Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=653
	I0612 15:03:39.794167   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.936715611Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:39.794246   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960715605Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:39.794246   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960765806Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:39.794246   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960836707Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:39.794324   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961045509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794358   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961654317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.794394   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961681417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794394   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961916220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.794514   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962126123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794567   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962152723Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:39.794590   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962167223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794590   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962695730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794590   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.963400938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794658   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966083771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.794658   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966199872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.794742   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.794742   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966461076Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:39.794742   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967039883Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967257385Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967282486Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974400773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974631276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:39.794822   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974732277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:39.794906   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974755077Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:39.794906   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974771478Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:39.794906   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974844078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:39.794984   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975137982Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:39.794984   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975475986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:39.794984   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975634588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:39.794984   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975657088Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975672789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975691989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975721989Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975744389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795191   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975762790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795303   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975776490Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795303   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975789190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795303   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975800790Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.795303   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975819990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795393   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975835091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795393   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975847091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795393   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975859491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795393   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975870791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795479   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975883291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795479   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975894491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795479   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975906891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795479   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975920192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795562   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975935492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795562   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975947192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795562   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975958792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795649   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975971092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795649   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975989492Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:39.795649   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976009893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795649   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976030193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795745   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976044093Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:39.795745   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976167595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:39.795745   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976210595Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:39.795845   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976227295Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:39.795845   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976239996Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:39.795936   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976250696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.795936   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976263096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:39.795936   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976273096Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:39.796015   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976489199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:39.796015   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976766002Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:39.796015   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976819403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:39.796015   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976839003Z" level=info msg="containerd successfully booted in 0.042772s"
	I0612 15:03:39.796110   13752 command_runner.go:130] > Jun 12 22:01:51 multinode-025000 dockerd[647]: time="2024-06-12T22:01:51.958896661Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:39.796110   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.175284022Z" level=info msg="Loading containers: start."
	I0612 15:03:39.796110   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.600253538Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:39.796110   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.679773678Z" level=info msg="Loading containers: done."
	I0612 15:03:39.796187   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.711890198Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:39.796187   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.712661408Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:39.796187   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774658419Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:39.796187   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774960723Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.292813222Z" level=info msg="Processing signal 'terminated'"
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 systemd[1]: Stopping Docker Application Container Engine...
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.294859626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0612 15:03:39.796264   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295213927Z" level=info msg="Daemon shutdown complete"
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295258527Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295281927Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: docker.service: Deactivated successfully.
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Stopped Docker Application Container Engine.
	I0612 15:03:39.796341   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.376333019Z" level=info msg="Starting up"
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.377520222Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.378639425Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.412854304Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437361860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:39.796418   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437471260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:39.796556   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437558660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:39.796556   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437600861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796556   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437638361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.796643   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437674061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796643   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437957561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.796709   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438006462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796709   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438028962Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:39.796787   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438041362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796855   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438072362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796855   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438209862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796902   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441166869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.796902   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441307169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:39.796994   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441467569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:39.796994   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441599370Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:39.797048   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441629870Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:39.797087   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441648170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:39.797131   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441660470Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:39.797131   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442075271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:39.797131   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442166571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:39.797131   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442187871Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:39.797198   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442201971Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:39.797198   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442217371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:39.797198   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442266071Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:39.797276   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442474372Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:39.797276   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442551072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:39.797332   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442567272Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:39.797332   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442579372Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:39.797392   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442592672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797392   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442605072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797392   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442627672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797445   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442645772Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797445   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442660172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797445   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442671872Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797523   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442683572Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797523   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442694372Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:39.797581   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442714572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797581   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442727972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797581   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442739972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797645   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442754772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797645   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442766572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442778073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442788873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442800473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797768   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442812673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797768   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442826373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797768   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442837973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797768   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442849073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797833   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442860373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797833   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442875173Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:39.797833   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442974073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797912   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442994973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.797912   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443006773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:39.797963   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443066573Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:39.798003   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443088973Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:39.798003   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443100473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:39.798040   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443113173Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:39.798040   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443144073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:39.798184   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443156573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:39.798184   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443166273Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:39.798184   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443418874Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:39.798232   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443494174Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:39.798270   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443534574Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:39.798270   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443571274Z" level=info msg="containerd successfully booted in 0.033238s"
	I0612 15:03:39.798310   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.419757425Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:39.798310   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.449018892Z" level=info msg="Loading containers: start."
	I0612 15:03:39.798348   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.739331061Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:39.798387   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.815989438Z" level=info msg="Loading containers: done."
	I0612 15:03:39.798387   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842536299Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842674899Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885012997Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885608398Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Loaded network plugin cni"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start cri-dockerd grpc backend"
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-vgcxw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d\""
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-45qqd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27\""
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449365529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449468129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449499429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449616229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464315863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464397563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464444563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464765264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.578440826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.581064832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.798445   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582145135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799035   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582532135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799035   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617373216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799109   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617486816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799109   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617504016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799109   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617593816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799224   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da184577f0371664d0a472b38bbfcfd866178308bf69eaabdaefb47d30a7057a/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a228f6c30fdf44f53a40ac14a2a8b995155f743739957ac413c700924fc873ed/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20cbfb3fb853177b89366d165b6a1f67628b2c429266b77034ee6d1ca68b7bac/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094370315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094456516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094499716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094865116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.162934973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163009674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163029074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163177074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.167659984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170028290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170289390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.171053192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233482736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233861237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234167138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234578639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.197280978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198158780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799256   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198341381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799839   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213822116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.799839   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213977717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.799910   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214060117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799910   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214298317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.799910   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234135963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800008   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234182263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234192563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234264863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564394224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564548725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564602325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.565056126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630517377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630663477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630850678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.635052387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.972834166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.973545267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974028469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974235669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1044]: time="2024-06-12T22:03:03.121297409Z" level=info msg="ignoring event" container=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.122616734Z" level=info msg="shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123474651Z" level=warning msg="cleaning up after shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123682355Z" level=info msg="cleaning up dead shim" namespace=moby
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819634342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800043   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819751243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800625   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819788644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800625   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.820654753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800690   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004015440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800690   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004176540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800690   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004193540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800804   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.005298945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800804   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006561551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800838   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006633551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800882   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006681251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006796752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/986567ef57643aec05ae5353795c364b380cb0f13c2ba98b1c4e04897e7b2e46/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2434f89aefe0079002e81e136580c67ef1dca28bfa3b4c1e950241aea9663d4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542434894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542705495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542742195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.543238997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606926167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606994167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607017268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:39.800913   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607410069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
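	The journal excerpt above captures a full engine restart: dockerd (pid 647) receives SIGTERM at 22:02:17, a new dockerd (pid 1044) and its managed containerd (pid 1050) come up at 22:02:18, and cri-dockerd (pid 1271) re-applies the CNI pod CIDR (10.244.0.0/24) at 22:02:31. A minimal sketch for pulling the same window out of the guest's journal by hand, assuming the systemd unit names docker and cri-docker used on the stock minikube VM:

		# Hypothetical spot-check from inside the VM (minikube ssh);
		# interleaves both engine units across the restart window.
		sudo journalctl -u docker -u cri-docker --no-pager \
		  --since "2024-06-12 22:01:50" --until "2024-06-12 22:03:40"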
	I0612 15:03:39.822326   13752 logs.go:123] Gathering logs for container status ...
	I0612 15:03:39.822326   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 15:03:39.885944   13752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0612 15:03:39.886241   13752 command_runner.go:130] > f2a949d407287       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   2434f89aefe00       busybox-fc5497c4f-45qqd
	I0612 15:03:39.886368   13752 command_runner.go:130] > 26e5daf354e36       cbb01a7bd410d                                                                                         3 seconds ago        Running             coredns                   1                   986567ef57643       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:39.886368   13752 command_runner.go:130] > 448e057077ddc       6e38f40d628db                                                                                         26 seconds ago       Running             storage-provisioner       2                   5287b61207e62       storage-provisioner
	I0612 15:03:39.886368   13752 command_runner.go:130] > cccfd1e9fef5e       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a20975d81b350       kindnet-bqlg8
	I0612 15:03:39.886368   13752 command_runner.go:130] > 3546a5c003210       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5287b61207e62       storage-provisioner
	I0612 15:03:39.886368   13752 command_runner.go:130] > 227a905829b07       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   435c56b0fbbbb       kube-proxy-47lr8
	I0612 15:03:39.886368   13752 command_runner.go:130] > 6b61f5f6483d5       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   76517193a960a       etcd-multinode-025000
	I0612 15:03:39.886368   13752 command_runner.go:130] > bbe2d2e51b5f3       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   20cbfb3fb8531       kube-apiserver-multinode-025000
	I0612 15:03:39.886368   13752 command_runner.go:130] > 7acc8ff0a9317       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   a228f6c30fdf4       kube-controller-manager-multinode-025000
	I0612 15:03:39.886368   13752 command_runner.go:130] > 755750ecd1e39       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   da184577f0371       kube-scheduler-multinode-025000
	I0612 15:03:39.886368   13752 command_runner.go:130] > bfc0382d49a48       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   84a9b747663ca       busybox-fc5497c4f-45qqd
	I0612 15:03:39.886368   13752 command_runner.go:130] > e83cf4eef49e4       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   894c58e9fe752       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:39.886368   13752 command_runner.go:130] > 4d60d82f6bc5d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   92f2d5f19e95e       kindnet-bqlg8
	I0612 15:03:39.886368   13752 command_runner.go:130] > c4842faba751e       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   fad98f611536b       kube-proxy-47lr8
	I0612 15:03:39.886368   13752 command_runner.go:130] > 6b021c195669e       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d9933fdc9ca72       kube-scheduler-multinode-025000
	I0612 15:03:39.887099   13752 command_runner.go:130] > 685d167da53c9       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bb4351fab502e       kube-controller-manager-multinode-025000
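	The container table above is what minikube's log gatherer reads back from the node's container runtime. As a rough way to reproduce it by hand, the sketch below shells out to the Docker CLI through "minikube ssh"; the profile name and columns are taken from this run, but the format string and the exact command minikube issues internally are assumptions.

	    // containers.go - a minimal sketch, assuming the multinode-025000
	    // profile is still running and Docker is the container runtime.
	    package main

	    import (
	        "fmt"
	        "log"
	        "os/exec"
	    )

	    func main() {
	        // "table" plus \t separators mirrors the CONTAINER/IMAGE/CREATED/
	        // STATE/NAME columns in the log; docker expands \t itself.
	        out, err := exec.Command("minikube", "-p", "multinode-025000", "ssh",
	            `sudo docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.RunningFor}}\t{{.State}}\t{{.Names}}'`).CombinedOutput()
	        if err != nil {
	            log.Fatalf("docker ps failed: %v\n%s", err, out)
	        }
	        fmt.Print(string(out))
	    }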
	I0612 15:03:39.889651   13752 logs.go:123] Gathering logs for dmesg ...
	I0612 15:03:39.889651   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 15:03:39.910819   13752 command_runner.go:130] > [Jun12 22:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0612 15:03:39.910819   13752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0612 15:03:39.910819   13752 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0612 15:03:39.910819   13752 command_runner.go:130] > [  +0.131000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0612 15:03:39.910917   13752 command_runner.go:130] > [  +0.025099] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0612 15:03:39.910917   13752 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.064850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.023448] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0612 15:03:39.911062   13752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +5.508165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +1.342262] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +1.269809] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +7.259362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0612 15:03:39.911062   13752 command_runner.go:130] > [Jun12 22:01] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.155290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [Jun12 22:02] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.095843] kauditd_printk_skb: 73 callbacks suppressed
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.507476] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.171390] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.210222] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +2.904531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.189304] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.162041] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.261611] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.815328] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +0.096217] kauditd_printk_skb: 205 callbacks suppressed
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +3.646175] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +1.441935] kauditd_printk_skb: 54 callbacks suppressed
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +5.624550] kauditd_printk_skb: 20 callbacks suppressed
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +3.644538] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	I0612 15:03:39.911062   13752 command_runner.go:130] > [  +8.250122] kauditd_printk_skb: 70 callbacks suppressed
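	The dmesg pass above keeps only warning-severity-and-up kernel messages, which is why the boot warnings (nomodeset, RETBleed, NFSD recovery) dominate. A minimal local sketch of the same filter, assuming the util-linux dmesg that ships in the minikube guest:

	    // dmesg_warn.go - runs the same pipeline the harness logs above.
	    package main

	    import (
	        "fmt"
	        "log"
	        "os/exec"
	    )

	    func main() {
	        // -P: no pager, -H: human-readable timestamps, -L=never: no color codes.
	        out, err := exec.Command("bash", "-c",
	            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	        if err != nil {
	            log.Fatalf("dmesg failed: %v", err)
	        }
	        fmt.Print(string(out))
	    }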
	I0612 15:03:39.913122   13752 logs.go:123] Gathering logs for coredns [e83cf4eef49e] ...
	I0612 15:03:39.913122   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83cf4eef49e"
	I0612 15:03:39.941056   13752 command_runner.go:130] > .:53
	I0612 15:03:39.942203   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:39.942203   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:39.942203   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 127.0.0.1:53490 - 39118 "HINFO IN 4677201826540465335.2322207397622737457. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048277073s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:49256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267302s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:54623 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.08558s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:51804 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.048771085s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:53027 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.100151983s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:34534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001199s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:44985 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000141701s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:54544 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000543s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:55517 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000123601s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:42995 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099501s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:51839 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.135718274s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:52123 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000304602s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:36740 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274801s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:48333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003287018s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:55754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000962s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:51695 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000224102s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:49605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096301s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:37746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283001s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:54995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106501s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:49201 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077401s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:60577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077201s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:36057 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107301s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:43898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:49177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091201s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:45207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000584s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:36676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151001s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:60305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305802s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:37468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209201s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.0.3:34743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125201s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240801s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:42306 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000309601s
	I0612 15:03:39.942301   13752 command_runner.go:130] > [INFO] 10.244.1.2:36509 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152901s
	I0612 15:03:39.942829   13752 command_runner.go:130] > [INFO] 10.244.1.2:55614 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000545s
	I0612 15:03:39.943001   13752 command_runner.go:130] > [INFO] 10.244.0.3:39195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130301s
	I0612 15:03:39.943227   13752 command_runner.go:130] > [INFO] 10.244.0.3:34618 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272902s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.0.3:44444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177201s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.0.3:35691 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001307s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.1.2:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.1.2:41925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207401s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.1.2:44306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000736s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] 10.244.1.2:46158 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000547s
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0612 15:03:39.943290   13752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
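	Each CoreDNS query line above follows one shape: client address and port, a query id, the question ("TYPE IN name proto size do bufsize"), then the response code, flags, answer size, and latency; the final SIGTERM/lameduck pair simply records a clean shutdown. Below is a small sketch for pulling the interesting fields out of such lines; the regexp is inferred from the lines shown here, not from a published CoreDNS log specification.

	    // coredns_parse.go - extracts client, qtype, name, and rcode
	    // from a CoreDNS query-log line like those above.
	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    var queryLine = regexp.MustCompile(`^\[INFO\] ([\d.]+):\d+ - \d+ "(\w+) IN (\S+) .*" (\w+) `)

	    func main() {
	        line := `[INFO] 10.244.0.3:54623 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.08558s`
	        if m := queryLine.FindStringSubmatch(line); m != nil {
	            fmt.Printf("client=%s type=%s name=%s rcode=%s\n", m[1], m[2], m[3], m[4])
	        }
	    }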
	I0612 15:03:39.946705   13752 logs.go:123] Gathering logs for kube-scheduler [6b021c195669] ...
	I0612 15:03:39.947327   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b021c195669"
	I0612 15:03:39.977128   13752 command_runner.go:130] ! I0612 21:39:26.474423       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:39.977299   13752 command_runner.go:130] ! W0612 21:39:28.263287       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:39.977299   13752 command_runner.go:130] ! W0612 21:39:28.263543       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.977299   13752 command_runner.go:130] ! W0612 21:39:28.263706       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:39.977299   13752 command_runner.go:130] ! W0612 21:39:28.263849       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:39.977299   13752 command_runner.go:130] ! I0612 21:39:28.303051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:39.977299   13752 command_runner.go:130] ! I0612 21:39:28.305840       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:39.977426   13752 command_runner.go:130] ! I0612 21:39:28.310682       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:39.977426   13752 command_runner.go:130] ! I0612 21:39:28.312812       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:39.977426   13752 command_runner.go:130] ! I0612 21:39:28.313421       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:39.977426   13752 command_runner.go:130] ! I0612 21:39:28.313594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:39.977426   13752 command_runner.go:130] ! W0612 21:39:28.336905       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.977426   13752 command_runner.go:130] ! E0612 21:39:28.337826       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.977590   13752 command_runner.go:130] ! W0612 21:39:28.338227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:39.977590   13752 command_runner.go:130] ! E0612 21:39:28.338391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:39.977703   13752 command_runner.go:130] ! W0612 21:39:28.338652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:39.977703   13752 command_runner.go:130] ! E0612 21:39:28.338896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:39.977794   13752 command_runner.go:130] ! W0612 21:39:28.339195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:39.977794   13752 command_runner.go:130] ! E0612 21:39:28.339406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:39.977794   13752 command_runner.go:130] ! W0612 21:39:28.339694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:39.977794   13752 command_runner.go:130] ! E0612 21:39:28.339892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:39.977910   13752 command_runner.go:130] ! W0612 21:39:28.340188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:39.977910   13752 command_runner.go:130] ! E0612 21:39:28.340362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:39.977910   13752 command_runner.go:130] ! W0612 21:39:28.340697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.977910   13752 command_runner.go:130] ! E0612 21:39:28.341129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978061   13752 command_runner.go:130] ! W0612 21:39:28.341447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.341664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.341989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.342229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.342540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.344839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.347872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! W0612 21:39:28.345823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.348490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.348742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.349066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:39.978115   13752 command_runner.go:130] ! E0612 21:39:28.349147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! W0612 21:39:29.192073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! E0612 21:39:29.192126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! W0612 21:39:29.249000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! E0612 21:39:29.249248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978646   13752 command_runner.go:130] ! W0612 21:39:29.268880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:39.978824   13752 command_runner.go:130] ! E0612 21:39:29.268972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:39.978824   13752 command_runner.go:130] ! W0612 21:39:29.271696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978824   13752 command_runner.go:130] ! E0612 21:39:29.271839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.978928   13752 command_runner.go:130] ! W0612 21:39:29.275489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:39.978928   13752 command_runner.go:130] ! E0612 21:39:29.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:39.979010   13752 command_runner.go:130] ! W0612 21:39:29.296739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.979010   13752 command_runner.go:130] ! E0612 21:39:29.297145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.979010   13752 command_runner.go:130] ! W0612 21:39:29.433593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:39.979110   13752 command_runner.go:130] ! E0612 21:39:29.433887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:39.979110   13752 command_runner.go:130] ! W0612 21:39:29.471880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:39.979110   13752 command_runner.go:130] ! E0612 21:39:29.471994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:39.979196   13752 command_runner.go:130] ! W0612 21:39:29.482669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.979196   13752 command_runner.go:130] ! E0612 21:39:29.483008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:39.979275   13752 command_runner.go:130] ! W0612 21:39:29.569402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:39.979275   13752 command_runner.go:130] ! E0612 21:39:29.571433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:39.979275   13752 command_runner.go:130] ! W0612 21:39:29.677906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:39.979364   13752 command_runner.go:130] ! E0612 21:39:29.677950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:39.979364   13752 command_runner.go:130] ! W0612 21:39:29.687951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:39.979462   13752 command_runner.go:130] ! E0612 21:39:29.688054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:39.979462   13752 command_runner.go:130] ! W0612 21:39:29.780288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:39.979539   13752 command_runner.go:130] ! E0612 21:39:29.780411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:39.979539   13752 command_runner.go:130] ! W0612 21:39:29.832564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:39.979539   13752 command_runner.go:130] ! E0612 21:39:29.832892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:39.979617   13752 command_runner.go:130] ! W0612 21:39:29.889591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.979617   13752 command_runner.go:130] ! E0612 21:39:29.889868       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:39.979617   13752 command_runner.go:130] ! I0612 21:39:32.513980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:39.979617   13752 command_runner.go:130] ! E0612 22:00:01.172050       1 run.go:74] "command failed" err="finished without leader elect"
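	Read as a sequence, the scheduler log above is mostly benign: the forbidden list/watch errors occur while the RBAC caches are still syncing, the "Caches are synced" line resolves them, and the final "finished without leader elect" error at 22:00 marks the process being stopped when the node restarted. When triaging a 400-line dump like this, it can help to keep only warning- and error-severity records; a sketch, assuming the standard klog prefix (severity letter, MMDD, then a timestamp):

	    // klog_triage.go - prints only W/E-severity klog lines from stdin.
	    // Usage (illustrative): docker logs --tail 400 6b021c195669 2>&1 | go run klog_triage.go
	    package main

	    import (
	        "bufio"
	        "fmt"
	        "os"
	        "regexp"
	    )

	    var sev = regexp.MustCompile(`^[WE]\d{4} \d{2}:\d{2}:\d{2}`)

	    func main() {
	        sc := bufio.NewScanner(os.Stdin)
	        for sc.Scan() {
	            if sev.MatchString(sc.Text()) {
	                fmt.Println(sc.Text())
	            }
	        }
	    }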
	I0612 15:03:39.988944   13752 logs.go:123] Gathering logs for kindnet [cccfd1e9fef5] ...
	I0612 15:03:39.988944   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccfd1e9fef5"
	I0612 15:03:40.009560   13752 command_runner.go:130] ! I0612 22:02:33.621070       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:02:33.621857       1 main.go:107] hostIP = 172.23.200.184
	I0612 15:03:40.015182   13752 command_runner.go:130] ! podIP = 172.23.200.184
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:02:33.622055       1 main.go:116] setting mtu 1500 for CNI 
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:02:33.622069       1 main.go:146] kindnetd IP family: "ipv4"
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:02:33.622082       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:03:03.928722       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:03:03.948068       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:03:03.948207       1 main.go:227] handling current node
	I0612 15:03:40.015182   13752 command_runner.go:130] ! I0612 22:03:04.015006       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.015280       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.015617       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.196.105 Flags: [] Table: 0} 
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.015960       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.015976       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:04.016053       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:14.032118       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:40.015311   13752 command_runner.go:130] ! I0612 22:03:14.032228       1 main.go:227] handling current node
	I0612 15:03:40.015410   13752 command_runner.go:130] ! I0612 22:03:14.032243       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.015410   13752 command_runner.go:130] ! I0612 22:03:14.032255       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.015410   13752 command_runner.go:130] ! I0612 22:03:14.032739       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.015470   13752 command_runner.go:130] ! I0612 22:03:14.032836       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.015503   13752 command_runner.go:130] ! I0612 22:03:24.045393       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:40.015503   13752 command_runner.go:130] ! I0612 22:03:24.045492       1 main.go:227] handling current node
	I0612 15:03:40.015503   13752 command_runner.go:130] ! I0612 22:03:24.045504       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:24.045510       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:24.045926       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:24.045941       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:34.052186       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:34.052288       1 main.go:227] handling current node
	I0612 15:03:40.015572   13752 command_runner.go:130] ! I0612 22:03:34.052302       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.015671   13752 command_runner.go:130] ! I0612 22:03:34.052309       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.015671   13752 command_runner.go:130] ! I0612 22:03:34.052423       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.015671   13752 command_runner.go:130] ! I0612 22:03:34.052452       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
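	The kindnet entries above show the expected steady state: after one initial apiserver timeout, the daemon re-lists nodes every ten seconds, handles the local node, and installs a route to each remote node's pod CIDR via that node's IP. The route it logs for multinode-025000-m02 corresponds to roughly the following, written against the vishvananda/netlink package; the destination and gateway values come from the log, and this is a sketch of the idea (requiring Linux and root), not kindnet's exact code.

	    // kindnet_route.go - install a pod-CIDR route like the one logged above.
	    package main

	    import (
	        "log"
	        "net"

	        "github.com/vishvananda/netlink"
	    )

	    func main() {
	        _, dst, err := net.ParseCIDR("10.244.1.0/24") // pod CIDR of multinode-025000-m02
	        if err != nil {
	            log.Fatal(err)
	        }
	        route := netlink.Route{
	            Dst: dst,
	            Gw:  net.ParseIP("172.23.196.105"), // node IP of multinode-025000-m02
	        }
	        // RouteReplace creates the route, or updates it if one already exists.
	        if err := netlink.RouteReplace(&route); err != nil {
	            log.Fatal(err)
	        }
	    }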
	I0612 15:03:40.017991   13752 logs.go:123] Gathering logs for describe nodes ...
	I0612 15:03:40.018063   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 15:03:40.257940   13752 command_runner.go:130] > Name:               multinode-025000
	I0612 15:03:40.257940   13752 command_runner.go:130] > Roles:              control-plane
	I0612 15:03:40.257940   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0612 15:03:40.257940   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:40.257940   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:40.257940   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:39:28 +0000
	I0612 15:03:40.257940   13752 command_runner.go:130] > Taints:             <none>
	I0612 15:03:40.257940   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:40.257940   13752 command_runner.go:130] > Lease:
	I0612 15:03:40.257940   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000
	I0612 15:03:40.257940   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:40.257940   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 22:03:32 +0000
	I0612 15:03:40.257940   13752 command_runner.go:130] > Conditions:
	I0612 15:03:40.257940   13752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0612 15:03:40.257940   13752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0612 15:03:40.257940   13752 command_runner.go:130] >   MemoryPressure   False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0612 15:03:40.257940   13752 command_runner.go:130] >   DiskPressure     False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0612 15:03:40.257940   13752 command_runner.go:130] >   PIDPressure      False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Ready            True    Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 22:03:11 +0000   KubeletReady                 kubelet is posting ready status
	I0612 15:03:40.258608   13752 command_runner.go:130] > Addresses:
	I0612 15:03:40.258608   13752 command_runner.go:130] >   InternalIP:  172.23.200.184
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Hostname:    multinode-025000
	I0612 15:03:40.258608   13752 command_runner.go:130] > Capacity:
	I0612 15:03:40.258608   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.258608   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.258608   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.258608   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.258608   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.258608   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:40.258608   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.258608   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.258608   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.258608   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.258608   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.258608   13752 command_runner.go:130] > System Info:
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Machine ID:                 e65e28dfa5bf4f27a0123e4ae1007793
	I0612 15:03:40.258608   13752 command_runner.go:130] >   System UUID:                3e5a42d3-ea80-0c4d-ad18-4b76e4f3e22f
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Boot ID:                    0efecf43-b070-4a8f-b542-4d1fd07306ad
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:40.258608   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:40.258608   13752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0612 15:03:40.258608   13752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0612 15:03:40.258608   13752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0612 15:03:40.258608   13752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:40.258608   13752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0612 15:03:40.258608   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-45qqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-vgcxw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 etcd-multinode-025000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         69s
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kindnet-bqlg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-025000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-025000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kube-proxy-47lr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:40.259191   13752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-025000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:40.259401   13752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:40.259401   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:40.259401   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:40.259401   13752 command_runner.go:130] >   Resource           Requests     Limits
	I0612 15:03:40.259401   13752 command_runner.go:130] >   --------           --------     ------
	I0612 15:03:40.259401   13752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0612 15:03:40.259401   13752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0612 15:03:40.259401   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0612 15:03:40.259493   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0612 15:03:40.259493   13752 command_runner.go:130] > Events:
	I0612 15:03:40.259493   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:40.259493   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:40.259493   13752 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0612 15:03:40.259493   13752 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0612 15:03:40.259579   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:40.259579   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.259579   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:40.259579   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.259672   13752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-025000 status is now: NodeReady
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:40.259758   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.259850   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:40.259850   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.259850   13752 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:40.259850   13752 command_runner.go:130] > Name:               multinode-025000-m02
	I0612 15:03:40.259850   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:40.259850   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:40.259850   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m02
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:40.259947   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:40.260030   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_42_39_0700
	I0612 15:03:40.260030   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:40.260030   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:40.260030   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:40.260030   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:40.260113   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:42:39 +0000
	I0612 15:03:40.260113   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:40.260113   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:40.260113   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:40.260113   13752 command_runner.go:130] > Lease:
	I0612 15:03:40.260113   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m02
	I0612 15:03:40.260197   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:40.260197   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:20 +0000
	I0612 15:03:40.260197   13752 command_runner.go:130] > Conditions:
	I0612 15:03:40.260197   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:40.260197   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:40.260197   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.260197   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.260197   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.260349   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.260349   13752 command_runner.go:130] > Addresses:
	I0612 15:03:40.260440   13752 command_runner.go:130] >   InternalIP:  172.23.196.105
	I0612 15:03:40.260440   13752 command_runner.go:130] >   Hostname:    multinode-025000-m02
	I0612 15:03:40.260440   13752 command_runner.go:130] > Capacity:
	I0612 15:03:40.260440   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.260440   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.260440   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.260440   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.260525   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.260525   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:40.260525   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.260525   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.260525   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.260525   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.260525   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.260525   13752 command_runner.go:130] > System Info:
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Machine ID:                 c11d7ff5518449f8bc8169a1fd7b0c4b
	I0612 15:03:40.260615   13752 command_runner.go:130] >   System UUID:                3b021c48-8479-f34c-83c2-77b944a77c5e
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Boot ID:                    67e77c09-c6b2-4c01-b167-2481dd4a7a96
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:40.260615   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:40.260615   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:40.260701   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:40.260701   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:40.260701   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:40.260701   13752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0612 15:03:40.260701   13752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0612 15:03:40.260701   13752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0612 15:03:40.260701   13752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:40.260793   13752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0612 15:03:40.260793   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-9bsls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:40.260793   13752 command_runner.go:130] >   kube-system                 kindnet-v4cqk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0612 15:03:40.260793   13752 command_runner.go:130] >   kube-system                 kube-proxy-tdcdp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0612 15:03:40.260793   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:40.260878   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:40.260878   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:40.260878   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:40.260878   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:40.260878   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:40.260878   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:40.260968   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:40.260968   13752 command_runner.go:130] > Events:
	I0612 15:03:40.260968   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:40.260968   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:40.260968   13752 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0612 15:03:40.260968   13752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:40.260968   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	I0612 15:03:40.261057   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.261057   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	I0612 15:03:40.261057   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.261057   13752 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-025000-m02 status is now: NodeReady
	I0612 15:03:40.261163   13752 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:40.261163   13752 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-025000-m02 status is now: NodeNotReady
	I0612 15:03:40.261163   13752 command_runner.go:130] > Name:               multinode-025000-m03
	I0612 15:03:40.261163   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:40.261163   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:40.261303   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:40.261303   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:40.261356   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m03
	I0612 15:03:40.261356   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:40.261386   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:40.261386   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:40.261386   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:40.261422   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_57_59_0700
	I0612 15:03:40.261422   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:40.261453   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:40.261485   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:40.261485   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:40.261485   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:57:58 +0000
	I0612 15:03:40.261520   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:40.261520   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:40.261551   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:40.261551   13752 command_runner.go:130] > Lease:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m03
	I0612 15:03:40.261551   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:40.261551   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:00 +0000
	I0612 15:03:40.261551   13752 command_runner.go:130] > Conditions:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:40.261551   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:40.261551   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.261551   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.261551   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:40.261551   13752 command_runner.go:130] > Addresses:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   InternalIP:  172.23.206.72
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Hostname:    multinode-025000-m03
	I0612 15:03:40.261551   13752 command_runner.go:130] > Capacity:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.261551   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.261551   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.261551   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.261551   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.261551   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:40.261551   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:40.261551   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:40.261551   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:40.261551   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:40.261551   13752 command_runner.go:130] > System Info:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Machine ID:                 b62d5e6740dc42d880d6595ac7dd57ae
	I0612 15:03:40.261551   13752 command_runner.go:130] >   System UUID:                31a13a9b-b7c6-6643-8352-fb322079216a
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Boot ID:                    a21b9eff-2471-4589-9e35-5845aae64358
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:40.261551   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:40.261551   13752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0612 15:03:40.261551   13752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0612 15:03:40.261551   13752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:40.261551   13752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0612 15:03:40.261551   13752 command_runner.go:130] >   kube-system                 kindnet-8252q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0612 15:03:40.261551   13752 command_runner.go:130] >   kube-system                 kube-proxy-7jwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0612 15:03:40.261551   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:40.261551   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:40.261551   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:40.261551   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:40.262155   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:40.262155   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:40.262155   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:40.262155   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:40.262155   13752 command_runner.go:130] > Events:
	I0612 15:03:40.262155   13752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0612 15:03:40.262155   13752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0612 15:03:40.262155   13752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0612 15:03:40.262155   13752 command_runner.go:130] >   Normal  Starting                 5m38s                  kube-proxy       
	I0612 15:03:40.262155   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:40.262337   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  RegisteredNode           5m41s                  node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:40.262427   13752 command_runner.go:130] >   Normal  NodeReady                5m34s                  kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:40.262526   13752 command_runner.go:130] >   Normal  NodeNotReady             3m55s                  node-controller  Node multinode-025000-m03 status is now: NodeNotReady
	I0612 15:03:40.262526   13752 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:40.269958   13752 logs.go:123] Gathering logs for kube-apiserver [bbe2d2e51b5f] ...
	I0612 15:03:40.269958   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe2d2e51b5f"
	I0612 15:03:40.294743   13752 command_runner.go:130] ! I0612 22:02:28.032945       1 options.go:221] external host was not specified, using 172.23.200.184
	I0612 15:03:40.294743   13752 command_runner.go:130] ! I0612 22:02:28.036290       1 server.go:148] Version: v1.30.1
	I0612 15:03:40.294743   13752 command_runner.go:130] ! I0612 22:02:28.036339       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.306847   13752 command_runner.go:130] ! I0612 22:02:28.916544       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 15:03:40.306847   13752 command_runner.go:130] ! I0612 22:02:28.917947       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:40.306847   13752 command_runner.go:130] ! I0612 22:02:28.921952       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 15:03:40.306928   13752 command_runner.go:130] ! I0612 22:02:28.922146       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 15:03:40.306928   13752 command_runner.go:130] ! I0612 22:02:28.922426       1 instance.go:299] Using reconciler: lease
	I0612 15:03:40.306928   13752 command_runner.go:130] ! I0612 22:02:29.570201       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0612 15:03:40.307121   13752 command_runner.go:130] ! W0612 22:02:29.570355       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307121   13752 command_runner.go:130] ! I0612 22:02:29.801222       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0612 15:03:40.307185   13752 command_runner.go:130] ! I0612 22:02:29.801702       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0612 15:03:40.307185   13752 command_runner.go:130] ! I0612 22:02:30.046166       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0612 15:03:40.307185   13752 command_runner.go:130] ! I0612 22:02:30.216981       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0612 15:03:40.307243   13752 command_runner.go:130] ! I0612 22:02:30.231997       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0612 15:03:40.307286   13752 command_runner.go:130] ! W0612 22:02:30.232097       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307286   13752 command_runner.go:130] ! W0612 22:02:30.232107       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307355   13752 command_runner.go:130] ! I0612 22:02:30.232792       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0612 15:03:40.307355   13752 command_runner.go:130] ! W0612 22:02:30.232881       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307396   13752 command_runner.go:130] ! I0612 22:02:30.233864       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0612 15:03:40.307396   13752 command_runner.go:130] ! I0612 22:02:30.235099       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0612 15:03:40.307396   13752 command_runner.go:130] ! W0612 22:02:30.235211       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0612 15:03:40.307453   13752 command_runner.go:130] ! W0612 22:02:30.235220       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0612 15:03:40.307492   13752 command_runner.go:130] ! I0612 22:02:30.237278       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0612 15:03:40.307492   13752 command_runner.go:130] ! W0612 22:02:30.237314       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0612 15:03:40.307526   13752 command_runner.go:130] ! I0612 22:02:30.238451       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0612 15:03:40.307526   13752 command_runner.go:130] ! W0612 22:02:30.238555       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307564   13752 command_runner.go:130] ! W0612 22:02:30.238564       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307564   13752 command_runner.go:130] ! I0612 22:02:30.239199       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.239289       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.239352       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.239881       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.242982       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.243157       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.243324       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.245920       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.246121       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.246235       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.249402       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.249562       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.255420       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.255587       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.255759       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.257021       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.257206       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.257308       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.269872       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.270105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.270312       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.272005       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.273608       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.273714       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.273724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.277668       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.277779       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.277789       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.280767       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.280916       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.280928       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.281776       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0612 15:03:40.307590   13752 command_runner.go:130] ! W0612 22:02:30.281806       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.307590   13752 command_runner.go:130] ! I0612 22:02:30.296752       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0612 15:03:40.308202   13752 command_runner.go:130] ! W0612 22:02:30.296810       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:40.308202   13752 command_runner.go:130] ! I0612 22:02:30.901606       1 secure_serving.go:213] Serving securely on [::]:8443
	I0612 15:03:40.308255   13752 command_runner.go:130] ! I0612 22:02:30.901766       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:40.308255   13752 command_runner.go:130] ! I0612 22:02:30.903281       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0612 15:03:40.308336   13752 command_runner.go:130] ! I0612 22:02:30.903373       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0612 15:03:40.308336   13752 command_runner.go:130] ! I0612 22:02:30.903401       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0612 15:03:40.308336   13752 command_runner.go:130] ! I0612 22:02:30.903987       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0612 15:03:40.308382   13752 command_runner.go:130] ! I0612 22:02:30.904124       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0612 15:03:40.308382   13752 command_runner.go:130] ! I0612 22:02:30.904843       1 aggregator.go:163] waiting for initial CRD sync...
	I0612 15:03:40.308452   13752 command_runner.go:130] ! I0612 22:02:30.905095       1 controller.go:78] Starting OpenAPI AggregationController
	I0612 15:03:40.308452   13752 command_runner.go:130] ! I0612 22:02:30.906424       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0612 15:03:40.308508   13752 command_runner.go:130] ! I0612 22:02:30.901780       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:40.308508   13752 command_runner.go:130] ! I0612 22:02:30.907108       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:40.308508   13752 command_runner.go:130] ! I0612 22:02:30.907337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:40.308576   13752 command_runner.go:130] ! I0612 22:02:30.901790       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0612 15:03:40.308616   13752 command_runner.go:130] ! I0612 22:02:30.901800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:40.308616   13752 command_runner.go:130] ! I0612 22:02:30.909555       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0612 15:03:40.308616   13752 command_runner.go:130] ! I0612 22:02:30.909699       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0612 15:03:40.308678   13752 command_runner.go:130] ! I0612 22:02:30.910003       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0612 15:03:40.308678   13752 command_runner.go:130] ! I0612 22:02:30.911734       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0612 15:03:40.308678   13752 command_runner.go:130] ! I0612 22:02:30.911846       1 controller.go:116] Starting legacy_token_tracking_controller
	I0612 15:03:40.308678   13752 command_runner.go:130] ! I0612 22:02:30.911861       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0612 15:03:40.308740   13752 command_runner.go:130] ! I0612 22:02:30.912590       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0612 15:03:40.308740   13752 command_runner.go:130] ! I0612 22:02:30.912666       1 available_controller.go:423] Starting AvailableConditionController
	I0612 15:03:40.308740   13752 command_runner.go:130] ! I0612 22:02:30.912673       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.913776       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.953613       1 controller.go:139] Starting OpenAPI controller
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.953929       1 controller.go:87] Starting OpenAPI V3 controller
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.954278       1 naming_controller.go:291] Starting NamingConditionController
	I0612 15:03:40.308816   13752 command_runner.go:130] ! I0612 22:02:30.954516       1 establishing_controller.go:76] Starting EstablishingController
	I0612 15:03:40.308902   13752 command_runner.go:130] ! I0612 22:02:30.954966       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0612 15:03:40.308902   13752 command_runner.go:130] ! I0612 22:02:30.955230       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0612 15:03:40.308902   13752 command_runner.go:130] ! I0612 22:02:30.955507       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0612 15:03:40.308902   13752 command_runner.go:130] ! I0612 22:02:31.003418       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 15:03:40.308973   13752 command_runner.go:130] ! I0612 22:02:31.009966       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 15:03:40.308973   13752 command_runner.go:130] ! I0612 22:02:31.010019       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 15:03:40.308973   13752 command_runner.go:130] ! I0612 22:02:31.010029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 15:03:40.308973   13752 command_runner.go:130] ! I0612 22:02:31.010400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 15:03:40.309038   13752 command_runner.go:130] ! I0612 22:02:31.011993       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 15:03:40.309038   13752 command_runner.go:130] ! I0612 22:02:31.012756       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 15:03:40.309085   13752 command_runner.go:130] ! I0612 22:02:31.017182       1 aggregator.go:165] initial CRD sync complete...
	I0612 15:03:40.309436   13752 command_runner.go:130] ! I0612 22:02:31.017223       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 15:03:40.309436   13752 command_runner.go:130] ! I0612 22:02:31.017231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 15:03:40.309477   13752 command_runner.go:130] ! I0612 22:02:31.017238       1 cache.go:39] Caches are synced for autoregister controller
	I0612 15:03:40.309477   13752 command_runner.go:130] ! I0612 22:02:31.018109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 15:03:40.309519   13752 command_runner.go:130] ! I0612 22:02:31.018524       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:40.309563   13752 command_runner.go:130] ! I0612 22:02:31.019519       1 policy_source.go:224] refreshing policies
	I0612 15:03:40.309563   13752 command_runner.go:130] ! I0612 22:02:31.020420       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 15:03:40.309605   13752 command_runner.go:130] ! I0612 22:02:31.091331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 15:03:40.309605   13752 command_runner.go:130] ! I0612 22:02:31.909532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 15:03:40.309648   13752 command_runner.go:130] ! W0612 22:02:32.355789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.198.154 172.23.200.184]
	I0612 15:03:40.309648   13752 command_runner.go:130] ! I0612 22:02:32.358485       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 15:03:40.309694   13752 command_runner.go:130] ! I0612 22:02:32.377254       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 15:03:40.309694   13752 command_runner.go:130] ! I0612 22:02:33.727670       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 15:03:40.309760   13752 command_runner.go:130] ! I0612 22:02:34.008881       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 15:03:40.309760   13752 command_runner.go:130] ! I0612 22:02:34.034607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 15:03:40.309800   13752 command_runner.go:130] ! I0612 22:02:34.157870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 15:03:40.309800   13752 command_runner.go:130] ! I0612 22:02:34.176471       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 15:03:40.309853   13752 command_runner.go:130] ! W0612 22:02:52.350035       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.200.184]
	I0612 15:03:40.317267   13752 logs.go:123] Gathering logs for kube-proxy [227a905829b0] ...
	I0612 15:03:40.317267   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227a905829b0"
	I0612 15:03:40.337007   13752 command_runner.go:130] ! I0612 22:02:33.538961       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:40.346991   13752 command_runner.go:130] ! I0612 22:02:33.585761       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.200.184"]
	I0612 15:03:40.346991   13752 command_runner.go:130] ! I0612 22:02:33.754056       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:40.346991   13752 command_runner.go:130] ! I0612 22:02:33.754118       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:40.346991   13752 command_runner.go:130] ! I0612 22:02:33.754141       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:40.347126   13752 command_runner.go:130] ! I0612 22:02:33.765449       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:40.347162   13752 command_runner.go:130] ! I0612 22:02:33.766192       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:40.347162   13752 command_runner.go:130] ! I0612 22:02:33.766246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.347162   13752 command_runner.go:130] ! I0612 22:02:33.769980       1 config.go:192] "Starting service config controller"
	I0612 15:03:40.347282   13752 command_runner.go:130] ! I0612 22:02:33.770461       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:40.347333   13752 command_runner.go:130] ! I0612 22:02:33.770493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:40.347333   13752 command_runner.go:130] ! I0612 22:02:33.770630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:40.347388   13752 command_runner.go:130] ! I0612 22:02:33.773852       1 config.go:319] "Starting node config controller"
	I0612 15:03:40.347433   13752 command_runner.go:130] ! I0612 22:02:33.773944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:40.347471   13752 command_runner.go:130] ! I0612 22:02:33.870743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:40.347510   13752 command_runner.go:130] ! I0612 22:02:33.870698       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:40.347510   13752 command_runner.go:130] ! I0612 22:02:33.882534       1 shared_informer.go:320] Caches are synced for node config
	I0612 15:03:40.350077   13752 logs.go:123] Gathering logs for kubelet ...
	I0612 15:03:40.350164   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 15:03:40.379682   13752 command_runner.go:130] > Jun 12 22:02:21 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.063456    1381 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064093    1381 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064387    1381 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: E0612 22:02:22.065868    1381 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:40.380450   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789327    1437 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789465    1437 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.790480    1437 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: E0612 22:02:22.790564    1437 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:40.380672   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:40.380874   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:40.380874   13752 command_runner.go:130] > Jun 12 22:02:23 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380874   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:40.380874   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414046    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414147    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414632    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.416608    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.437750    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:40.380998   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458497    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0612 15:03:40.381143   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458849    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0612 15:03:40.381143   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460038    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0612 15:03:40.381300   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460095    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-025000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0612 15:03:40.381344   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464057    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0612 15:03:40.381380   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464080    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0612 15:03:40.381380   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464924    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:40.381380   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466519    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0612 15:03:40.381380   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466546    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0612 15:03:40.381551   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466613    1517 kubelet.go:312] "Adding apiserver pod source"
	I0612 15:03:40.381551   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.467352    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0612 15:03:40.381551   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.471384    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.381643   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.471502    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.381643   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.471869    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0612 15:03:40.381643   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.477415    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0612 15:03:40.381729   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.478424    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0612 15:03:40.381729   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.480523    1517 server.go:1264] "Started kubelet"
	I0612 15:03:40.381729   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.481568    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.381814   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.481666    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.381814   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.481865    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0612 15:03:40.381814   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.482789    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0612 15:03:40.381899   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.485497    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0612 15:03:40.381899   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.490040    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.493219    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.495119    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.496095    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.498560    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0612 15:03:40.382008   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501388    1517 factory.go:221] Registration of the systemd container factory successfully
	I0612 15:03:40.382099   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501556    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0612 15:03:40.382099   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501657    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0612 15:03:40.382099   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.510641    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.382219   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.510706    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.382292   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.521028    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="200ms"
	I0612 15:03:40.382292   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.554579    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0612 15:03:40.382292   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.594809    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0612 15:03:40.382292   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595077    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595178    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598081    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598418    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598595    1517 policy_none.go:49] "None policy: Start"
	I0612 15:03:40.382393   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.600760    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.382469   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.602144    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:40.382469   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610755    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0612 15:03:40.382469   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610783    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0612 15:03:40.382544   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610843    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0612 15:03:40.382544   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.611758    1517 state_mem.go:75] "Updated machine memory state"
	I0612 15:03:40.382544   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.613995    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0612 15:03:40.382544   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.614216    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0612 15:03:40.382618   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615027    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0612 15:03:40.382618   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615636    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0612 15:03:40.382618   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615685    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0612 15:03:40.382618   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.615730    1517 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0612 15:03:40.382712   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.616221    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0612 15:03:40.382712   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.632621    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.382712   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.632711    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.382808   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.634150    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-025000\" not found"
	I0612 15:03:40.382808   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.644874    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:40.382889   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:40.382889   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:40.382889   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:40.382889   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:40.382968   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.717070    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d6071cd4356268889f798790dc93ce06" podNamespace="kube-system" podName="kube-apiserver-multinode-025000"
	I0612 15:03:40.382968   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.719714    1517 topology_manager.go:215] "Topology Admit Handler" podUID="88de11d8b1aaec126153d44e87c4b5dd" podNamespace="kube-system" podName="kube-controller-manager-multinode-025000"
	I0612 15:03:40.383082   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.720740    1517 topology_manager.go:215] "Topology Admit Handler" podUID="de62e7fd7d0feea82620e745032c1a67" podNamespace="kube-system" podName="kube-scheduler-multinode-025000"
	I0612 15:03:40.383082   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.722295    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="400ms"
	I0612 15:03:40.383082   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.724629    1517 topology_manager.go:215] "Topology Admit Handler" podUID="7b6b5637642f3d915c0db1461c7074e6" podNamespace="kube-system" podName="etcd-multinode-025000"
	I0612 15:03:40.383177   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725657    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad98f611536b15941d0f49c694b6b6c39318bca8a66620735a88a81a12d3610"
	I0612 15:03:40.383177   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725708    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb4351fab502e49592d49234119b810b53c5916eaf732d4ba148b3ad1eed4e6a"
	I0612 15:03:40.383177   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725720    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9e051df48486e732da2c72bf2d0e3ec93cf8774632ecedd8825e656ba04a93"
	I0612 15:03:40.383258   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725728    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2784305b1d5e9a088f0b73ff004b2d9eca305d397de3d7b9912638323d7c66b2"
	I0612 15:03:40.383258   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725737    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40443305b24f54fea9235d98bfb16f2d550b8914bfa46c0592b5c24be1ad5569"
	I0612 15:03:40.383258   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.736677    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9933fdc9ca72b65b57e5b4b996215763431b87f18af45fdc8195252497e1d9a"
	I0612 15:03:40.383354   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.760928    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d"
	I0612 15:03:40.383354   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.777475    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27"
	I0612 15:03:40.383453   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.794474    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f2d5f19e95ea2d1cfe140159a55c94f5d809c3b67661196b1e285ac389537f"
	I0612 15:03:40.383453   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.803790    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.383453   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.804820    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:40.383533   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885533    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-ca-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.383533   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885705    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-ca-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:40.383611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885746    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-k8s-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:40.383611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885768    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-k8s-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.383705   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885803    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-kubeconfig\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.383782   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885844    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.383782   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885869    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de62e7fd7d0feea82620e745032c1a67-kubeconfig\") pod \"kube-scheduler-multinode-025000\" (UID: \"de62e7fd7d0feea82620e745032c1a67\") " pod="kube-system/kube-scheduler-multinode-025000"
	I0612 15:03:40.383877   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885941    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-certs\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:40.383877   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885970    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-data\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:40.383956   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885997    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:40.384036   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.886023    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-flexvolume-dir\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:40.384036   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.124157    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="800ms"
	I0612 15:03:40.384036   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.206204    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.384165   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.207259    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:40.384165   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.576346    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384263   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.576490    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384263   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.832319    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384365   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.832430    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384365   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.847085    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384365   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.847226    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384479   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.894179    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384479   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.894251    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:40.384565   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.910045    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7"
	I0612 15:03:40.384565   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.925848    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="1.6s"
	I0612 15:03:40.384648   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.967442    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:40.384731   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: I0612 22:02:27.008640    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.384731   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: E0612 22:02:27.009541    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:40.384842   13752 command_runner.go:130] > Jun 12 22:02:28 multinode-025000 kubelet[1517]: I0612 22:02:28.611782    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:40.384842   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.067503    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-025000"
	I0612 15:03:40.384842   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.069193    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-025000"
	I0612 15:03:40.384842   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.078543    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0612 15:03:40.384927   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.083746    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0612 15:03:40.384927   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.087512    1517 setters.go:580] "Node became not ready" node="multinode-025000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-12T22:02:31Z","lastTransitionTime":"2024-06-12T22:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0612 15:03:40.384927   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.485482    1517 apiserver.go:52] "Watching apiserver"
	I0612 15:03:40.385023   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.491838    1517 topology_manager.go:215] "Topology Admit Handler" podUID="1f004a05-3f5f-444b-9ac0-88f0e23da904" podNamespace="kube-system" podName="kindnet-bqlg8"
	I0612 15:03:40.385023   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.492246    1517 topology_manager.go:215] "Topology Admit Handler" podUID="10b24fa7-8eea-4fbb-ab18-404e853aa7ab" podNamespace="kube-system" podName="kube-proxy-47lr8"
	I0612 15:03:40.385023   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.493249    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-025000" podUID="6b429685-b322-4b00-83fc-743786ff40e1"
	I0612 15:03:40.385139   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494355    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-025000" podUID="630bafc4-4576-4974-b638-7ab52dcfec18"
	I0612 15:03:40.385242   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494642    1517 topology_manager.go:215] "Topology Admit Handler" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgcxw"
	I0612 15:03:40.385242   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494763    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4" podNamespace="kube-system" podName="storage-provisioner"
	I0612 15:03:40.385330   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494876    1517 topology_manager.go:215] "Topology Admit Handler" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4" podNamespace="default" podName="busybox-fc5497c4f-45qqd"
	I0612 15:03:40.385380   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495127    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.385428   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495306    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.499353    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.541672    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.557538    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-025000"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593012    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-cni-cfg\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593075    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-lib-modules\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593188    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-lib-modules\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593684    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d20f7489-1aa1-44b8-9221-4d1849884be4-tmp\") pod \"storage-provisioner\" (UID: \"d20f7489-1aa1-44b8-9221-4d1849884be4\") " pod="kube-system/storage-provisioner"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593711    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-xtables-lock\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593752    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-xtables-lock\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594460    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.094549489 +0000 UTC m=+6.763435539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.622682    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dcbc8e258f964f689941b6844769d9" path="/var/lib/kubelet/pods/04dcbc8e258f964f689941b6844769d9/volumes"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.623801    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610414aa8160848c0b6b79ea0a700b83" path="/var/lib/kubelet/pods/610414aa8160848c0b6b79ea0a700b83/volumes"
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.626972    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627014    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627132    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.127114564 +0000 UTC m=+6.796000614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.385481   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.673848    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-025000" podStartSLOduration=0.673800971 podStartE2EDuration="673.800971ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.632162175 +0000 UTC m=+6.301048225" watchObservedRunningTime="2024-06-12 22:02:31.673800971 +0000 UTC m=+6.342686921"
	I0612 15:03:40.386059   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.674234    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-025000" podStartSLOduration=0.674226172 podStartE2EDuration="674.226172ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.67337587 +0000 UTC m=+6.342261920" watchObservedRunningTime="2024-06-12 22:02:31.674226172 +0000 UTC m=+6.343112222"
	I0612 15:03:40.386059   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099190    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.386059   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099284    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.099266752 +0000 UTC m=+7.768152702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.386059   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199774    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386174   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199808    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386212   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199864    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.199845384 +0000 UTC m=+7.868731334 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386268   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.394461    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.774495    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.791274    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106313    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106394    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.106375874 +0000 UTC m=+9.775261924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208318    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208375    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208431    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.208413609 +0000 UTC m=+9.877299559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.617822    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.618103    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.125562    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.126376    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.12633293 +0000 UTC m=+13.795218980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226548    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226607    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226693    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.226674161 +0000 UTC m=+13.895560111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.616712    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.386302   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617047    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.386879   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617270    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.386879   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618147    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.386988   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618607    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.386988   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164650    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.387065   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164956    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.164935524 +0000 UTC m=+21.833821574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.265764    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266004    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.266062158 +0000 UTC m=+21.934948208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.616548    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.617577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:40 multinode-025000 kubelet[1517]: E0612 22:02:40.619032    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617010    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617816    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617105    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617755    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.617112    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.618034    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.621402    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234271    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.234402815 +0000 UTC m=+37.903288765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335532    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335632    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387098   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335696    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.33568009 +0000 UTC m=+38.004566140 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.387674   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617048    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387674   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617530    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387820   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617040    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617673    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:50 multinode-025000 kubelet[1517]: E0612 22:02:50.623368    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.616848    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.617656    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617130    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617679    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617082    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617595    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.624795    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.617430    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.618180    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.616577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.617339    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:03:00 multinode-025000 kubelet[1517]: E0612 22:03:00.626741    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617176    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617573    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.387859   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236005    1517 scope.go:117] "RemoveContainer" containerID="61910369e0d4ba1a5246a686e904c168fc7467d239e475004146ddf2835e8e78"
	I0612 15:03:40.388473   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236962    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.239739    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d20f7489-1aa1-44b8-9221-4d1849884be4)\"" pod="kube-system/storage-provisioner" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284341    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.284401461 +0000 UTC m=+69.953287411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385432    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385531    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.385594617 +0000 UTC m=+70.054480667 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.616668    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.617100    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617214    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617674    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.628542    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:40.388795   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.616455    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.389396   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.617581    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.389396   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617093    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:40.389396   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617405    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:40.389531   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 kubelet[1517]: I0612 22:03:13.617647    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:40.389531   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.637114    1517 scope.go:117] "RemoveContainer" containerID="0749f44d03561395230c8a60a41853a49502741bf3bcd45acc924d346061f5b0"
	I0612 15:03:40.389570   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: E0612 22:03:25.663119    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:40.389570   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:40.389612   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:40.389648   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:40.389697   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:40.389697   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.699754    1517 scope.go:117] "RemoveContainer" containerID="2455f315465b9508a3fe1025d7150342eedb3cb09eb5f8fd9b2cbbffe1306db0"
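
The kubelet excerpt above captures two distinct problems: pod sync repeatedly skipped with "cni config uninitialized" while the network plugin comes back up (with the matching configmap and projected-volume mounts backing off from 16s to 32s), and an iptables canary failure, where kubelet cannot create the KUBE-KUBELET-CANARY chain because the legacy ip6tables has no `nat' table. A minimal check for the second condition, assuming shell access to the guest through minikube ssh (the profile name below is taken from these logs; the commands are a sketch, not a fix verified in this run):

    # Sketch: does the guest kernel expose an ip6tables nat table?
    minikube -p multinode-025000 ssh "sudo ip6tables -t nat -L >/dev/null 2>&1 && echo nat-ok || echo nat-missing"
    # If missing, loading the module is the usual first step (an assumption for this environment):
    minikube -p multinode-025000 ssh "sudo modprobe ip6table_nat"
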
	I0612 15:03:40.431275   13752 logs.go:123] Gathering logs for kube-scheduler [755750ecd1e3] ...
	I0612 15:03:40.431275   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 755750ecd1e3"
	I0612 15:03:40.456107   13752 command_runner.go:130] ! I0612 22:02:28.771072       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:40.460394   13752 command_runner.go:130] ! W0612 22:02:31.003959       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:40.460394   13752 command_runner.go:130] ! W0612 22:02:31.004072       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:40.460516   13752 command_runner.go:130] ! W0612 22:02:31.004087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:40.460516   13752 command_runner.go:130] ! W0612 22:02:31.004098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:40.460619   13752 command_runner.go:130] ! I0612 22:02:31.034273       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:40.460668   13752 command_runner.go:130] ! I0612 22:02:31.034440       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.460668   13752 command_runner.go:130] ! I0612 22:02:31.039288       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:40.460748   13752 command_runner.go:130] ! I0612 22:02:31.039331       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:40.460748   13752 command_runner.go:130] ! I0612 22:02:31.039699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:40.460800   13752 command_runner.go:130] ! I0612 22:02:31.040018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:40.460841   13752 command_runner.go:130] ! I0612 22:02:31.139849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
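
The kube-scheduler excerpt shows the usual startup sequence: a transient RBAC race (it cannot read configmaps/extension-apiserver-authentication before the rolebinding is visible, so it continues without that authentication configuration), followed by a clean start and synced caches. The warning text itself names the conventional remedy; as a sketch only, with ROLEBINDING_NAME and YOUR_NS:YOUR_SA left as the placeholders the message uses (the --context value is assumed to match this profile):

    kubectl --context multinode-025000 create rolebinding ROLEBINDING_NAME \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=YOUR_NS:YOUR_SA
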
	I0612 15:03:40.463106   13752 logs.go:123] Gathering logs for kube-proxy [c4842faba751] ...
	I0612 15:03:40.463180   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4842faba751"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.407607       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.423801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.198.154"]
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.480061       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.480182       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.480205       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.484521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.485171       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.485535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488126       1 config.go:192] "Starting service config controller"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488188       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.488969       1 config.go:319] "Starting node config controller"
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.489001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.588500       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.588641       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:40.488290   13752 command_runner.go:130] ! I0612 21:39:47.589226       1 shared_informer.go:320] Caches are synced for node config
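
The kube-proxy excerpt, by contrast, is healthy: it finds no IPv6 iptables support, runs single-stack IPv4 with the iptables proxier, sets route_localnet=1 so NodePorts answer on loopback, and syncs all three config controllers. To confirm the sysctl it reports setting, assuming the same minikube ssh access as above:

    # Sketch: expect "net.ipv4.conf.all.route_localnet = 1"
    minikube -p multinode-025000 ssh "sysctl net.ipv4.conf.all.route_localnet"
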
	I0612 15:03:40.492327   13752 logs.go:123] Gathering logs for kindnet [4d60d82f6bc5] ...
	I0612 15:03:40.492991   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d60d82f6bc5"
	I0612 15:03:40.536992   13752 command_runner.go:130] ! I0612 21:48:53.982546       1 main.go:227] handling current node
	I0612 15:03:40.537093   13752 command_runner.go:130] ! I0612 21:48:53.982561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537093   13752 command_runner.go:130] ! I0612 21:48:53.982568       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537093   13752 command_runner.go:130] ! I0612 21:48:53.982982       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537190   13752 command_runner.go:130] ! I0612 21:48:53.983049       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537190   13752 command_runner.go:130] ! I0612 21:49:03.989649       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537190   13752 command_runner.go:130] ! I0612 21:49:03.989791       1 main.go:227] handling current node
	I0612 15:03:40.537294   13752 command_runner.go:130] ! I0612 21:49:03.989809       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537294   13752 command_runner.go:130] ! I0612 21:49:03.989817       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537294   13752 command_runner.go:130] ! I0612 21:49:03.990195       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537294   13752 command_runner.go:130] ! I0612 21:49:03.990415       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000384       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000493       1 main.go:227] handling current node
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000507       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000513       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537383   13752 command_runner.go:130] ! I0612 21:49:14.000627       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:14.000640       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.006829       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.006871       1 main.go:227] handling current node
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.006883       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.006889       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537467   13752 command_runner.go:130] ! I0612 21:49:24.007645       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537559   13752 command_runner.go:130] ! I0612 21:49:24.007745       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537559   13752 command_runner.go:130] ! I0612 21:49:34.016679       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537559   13752 command_runner.go:130] ! I0612 21:49:34.016806       1 main.go:227] handling current node
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:34.016838       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:34.016845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:34.017149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:34.017279       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:44.025835       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537648   13752 command_runner.go:130] ! I0612 21:49:44.025933       1 main.go:227] handling current node
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:44.025947       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:44.025955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:44.026381       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:44.026533       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:54.033148       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.537737   13752 command_runner.go:130] ! I0612 21:49:54.033257       1 main.go:227] handling current node
	I0612 15:03:40.537821   13752 command_runner.go:130] ! I0612 21:49:54.033273       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.537821   13752 command_runner.go:130] ! I0612 21:49:54.033281       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.537821   13752 command_runner.go:130] ! I0612 21:49:54.033402       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.537821   13752 command_runner.go:130] ! I0612 21:49:54.033435       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.537997   13752 command_runner.go:130] ! I0612 21:50:04.046279       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.538337   13752 command_runner.go:130] ! I0612 21:50:04.046719       1 main.go:227] handling current node
	I0612 15:03:40.538337   13752 command_runner.go:130] ! I0612 21:50:04.046832       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:04.047109       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:04.047537       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:04.047572       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:14.064171       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:14.064216       1 main.go:227] handling current node
	I0612 15:03:40.538408   13752 command_runner.go:130] ! I0612 21:50:14.064230       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.538954   13752 command_runner.go:130] ! I0612 21:50:14.064236       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.538954   13752 command_runner.go:130] ! I0612 21:50:14.064574       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.539057   13752 command_runner.go:130] ! I0612 21:50:14.064665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.539099   13752 command_runner.go:130] ! I0612 21:50:24.071894       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.539099   13752 command_runner.go:130] ! I0612 21:50:24.071935       1 main.go:227] handling current node
	I0612 15:03:40.539168   13752 command_runner.go:130] ! I0612 21:50:24.071949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:24.071955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:24.072148       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:24.072184       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086428       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086522       1 main.go:227] handling current node
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086536       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086543       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086690       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:34.086707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.093862       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.093905       1 main.go:227] handling current node
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.093919       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.093925       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.094840       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:44.094916       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.539213   13752 command_runner.go:130] ! I0612 21:50:54.102869       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.541073   13752 command_runner.go:130] ! I0612 21:50:54.103074       1 main.go:227] handling current node
	I0612 15:03:40.541355   13752 command_runner.go:130] ! I0612 21:50:54.103091       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.541459   13752 command_runner.go:130] ! I0612 21:50:54.103100       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.541459   13752 command_runner.go:130] ! I0612 21:50:54.103237       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.541459   13752 command_runner.go:130] ! I0612 21:50:54.103276       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.541582   13752 command_runner.go:130] ! I0612 21:51:04.110391       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.541582   13752 command_runner.go:130] ! I0612 21:51:04.110501       1 main.go:227] handling current node
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:04.110517       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:04.110556       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:04.110721       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:04.110794       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:14.121126       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.541704   13752 command_runner.go:130] ! I0612 21:51:14.121263       1 main.go:227] handling current node
	I0612 15:03:40.541829   13752 command_runner.go:130] ! I0612 21:51:14.121280       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.541829   13752 command_runner.go:130] ! I0612 21:51:14.121288       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.541829   13752 command_runner.go:130] ! I0612 21:51:14.121430       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.541829   13752 command_runner.go:130] ! I0612 21:51:14.121462       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.541953   13752 command_runner.go:130] ! I0612 21:51:24.131659       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.542570   13752 command_runner.go:130] ! I0612 21:51:24.131690       1 main.go:227] handling current node
	I0612 15:03:40.542570   13752 command_runner.go:130] ! I0612 21:51:24.131702       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.542570   13752 command_runner.go:130] ! I0612 21:51:24.131708       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:24.132287       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:24.132319       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:34.139419       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:34.139546       1 main.go:227] handling current node
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:34.139561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.542704   13752 command_runner.go:130] ! I0612 21:51:34.139570       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.542817   13752 command_runner.go:130] ! I0612 21:51:34.140149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.542817   13752 command_runner.go:130] ! I0612 21:51:34.140253       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.542864   13752 command_runner.go:130] ! I0612 21:51:44.152295       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.542864   13752 command_runner.go:130] ! I0612 21:51:44.152430       1 main.go:227] handling current node
	I0612 15:03:40.542892   13752 command_runner.go:130] ! I0612 21:51:44.152464       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.542909   13752 command_runner.go:130] ! I0612 21:51:44.152471       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.542909   13752 command_runner.go:130] ! I0612 21:51:44.153262       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.542909   13752 command_runner.go:130] ! I0612 21:51:44.153471       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.542909   13752 command_runner.go:130] ! I0612 21:51:54.160684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.160938       1 main.go:227] handling current node
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.160953       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.160960       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.161457       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.542984   13752 command_runner.go:130] ! I0612 21:51:54.161482       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543084   13752 command_runner.go:130] ! I0612 21:52:04.170421       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543084   13752 command_runner.go:130] ! I0612 21:52:04.170526       1 main.go:227] handling current node
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:04.170541       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:04.170548       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:04.171076       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:04.171113       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180403       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180490       1 main.go:227] handling current node
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180508       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180516       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543172   13752 command_runner.go:130] ! I0612 21:52:14.180994       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:14.181032       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.195314       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.195545       1 main.go:227] handling current node
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.195735       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.195807       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543394   13752 command_runner.go:130] ! I0612 21:52:24.196026       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:24.196064       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.202013       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.202806       1 main.go:227] handling current node
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.202932       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.203029       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.203265       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543518   13752 command_runner.go:130] ! I0612 21:52:34.203299       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209440       1 main.go:227] handling current node
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209476       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209546       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543610   13752 command_runner.go:130] ! I0612 21:52:44.209839       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543708   13752 command_runner.go:130] ! I0612 21:52:44.210283       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543743   13752 command_runner.go:130] ! I0612 21:52:54.223351       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543743   13752 command_runner.go:130] ! I0612 21:52:54.223443       1 main.go:227] handling current node
	I0612 15:03:40.543793   13752 command_runner.go:130] ! I0612 21:52:54.223459       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543793   13752 command_runner.go:130] ! I0612 21:52:54.223466       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543828   13752 command_runner.go:130] ! I0612 21:52:54.223810       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543828   13752 command_runner.go:130] ! I0612 21:52:54.223840       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543828   13752 command_runner.go:130] ! I0612 21:53:04.236876       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543877   13752 command_runner.go:130] ! I0612 21:53:04.237155       1 main.go:227] handling current node
	I0612 15:03:40.543877   13752 command_runner.go:130] ! I0612 21:53:04.237949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543911   13752 command_runner.go:130] ! I0612 21:53:04.238341       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543933   13752 command_runner.go:130] ! I0612 21:53:04.238673       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543933   13752 command_runner.go:130] ! I0612 21:53:04.238707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245069       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245110       1 main.go:227] handling current node
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245122       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245131       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245834       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.543969   13752 command_runner.go:130] ! I0612 21:53:14.245932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.258923       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.258965       1 main.go:227] handling current node
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.258977       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.258983       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544080   13752 command_runner.go:130] ! I0612 21:53:24.259367       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:24.259399       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:34.265573       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:34.265738       1 main.go:227] handling current node
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:34.265787       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544156   13752 command_runner.go:130] ! I0612 21:53:34.265797       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:34.266180       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:34.266257       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:44.278968       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:44.279173       1 main.go:227] handling current node
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:44.279207       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544231   13752 command_runner.go:130] ! I0612 21:53:44.279294       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:44.279698       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:44.279829       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:54.290366       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:54.290472       1 main.go:227] handling current node
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:54.290487       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544334   13752 command_runner.go:130] ! I0612 21:53:54.290494       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544414   13752 command_runner.go:130] ! I0612 21:53:54.291158       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544414   13752 command_runner.go:130] ! I0612 21:53:54.291263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544414   13752 command_runner.go:130] ! I0612 21:54:04.308014       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544414   13752 command_runner.go:130] ! I0612 21:54:04.308117       1 main.go:227] handling current node
	I0612 15:03:40.544497   13752 command_runner.go:130] ! I0612 21:54:04.308133       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544497   13752 command_runner.go:130] ! I0612 21:54:04.308142       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544497   13752 command_runner.go:130] ! I0612 21:54:04.308605       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544497   13752 command_runner.go:130] ! I0612 21:54:04.308643       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316380       1 main.go:227] handling current node
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316396       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316403       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316942       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:14.316959       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544575   13752 command_runner.go:130] ! I0612 21:54:24.330853       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331009       1 main.go:227] handling current node
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331025       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331033       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331178       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544659   13752 command_runner.go:130] ! I0612 21:54:24.331213       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340396       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340543       1 main.go:227] handling current node
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340558       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340565       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.340924       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544738   13752 command_runner.go:130] ! I0612 21:54:34.341013       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.347468       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.347599       1 main.go:227] handling current node
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.347614       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.347622       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544815   13752 command_runner.go:130] ! I0612 21:54:44.348279       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:44.348396       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.364900       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.365031       1 main.go:227] handling current node
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.365046       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.365054       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.544893   13752 command_runner.go:130] ! I0612 21:54:54.365542       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:54:54.365727       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:55:04.381041       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:55:04.381087       1 main.go:227] handling current node
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:55:04.381103       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.544992   13752 command_runner.go:130] ! I0612 21:55:04.381110       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:04.381700       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:04.381853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:14.395619       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:14.395666       1 main.go:227] handling current node
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:14.395679       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545112   13752 command_runner.go:130] ! I0612 21:55:14.395686       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:14.396514       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:14.396536       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:24.411927       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:24.412012       1 main.go:227] handling current node
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:24.412028       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545206   13752 command_runner.go:130] ! I0612 21:55:24.412036       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:24.412568       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:24.412661       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:34.420011       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:34.420100       1 main.go:227] handling current node
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:34.420115       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545306   13752 command_runner.go:130] ! I0612 21:55:34.420122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:34.420481       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:34.420570       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432502       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432604       1 main.go:227] handling current node
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432620       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432632       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.432881       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545395   13752 command_runner.go:130] ! I0612 21:55:44.433061       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.446991       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447440       1 main.go:227] handling current node
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447622       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447655       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447830       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:55:54.447901       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:56:04.463393       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545480   13752 command_runner.go:130] ! I0612 21:56:04.463546       1 main.go:227] handling current node
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:04.463575       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:04.463596       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:04.463900       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:04.463932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.477690       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.477837       1 main.go:227] handling current node
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.477852       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.477860       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.545574   13752 command_runner.go:130] ! I0612 21:56:14.478029       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:14.478096       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:24.485525       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:24.485620       1 main.go:227] handling current node
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:24.485655       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.545662   13752 command_runner.go:130] ! I0612 21:56:24.485663       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547393   13752 command_runner.go:130] ! I0612 21:56:24.486202       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547514   13752 command_runner.go:130] ! I0612 21:56:24.486237       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547545   13752 command_runner.go:130] ! I0612 21:56:34.502904       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547545   13752 command_runner.go:130] ! I0612 21:56:34.502951       1 main.go:227] handling current node
	I0612 15:03:40.547584   13752 command_runner.go:130] ! I0612 21:56:34.502964       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547584   13752 command_runner.go:130] ! I0612 21:56:34.502970       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547584   13752 command_runner.go:130] ! I0612 21:56:34.503088       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:34.503684       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512292       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512356       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512368       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512374       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.512909       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:44.513033       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.520903       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521017       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521034       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521041       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521441       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:56:54.521665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.535531       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.535625       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.535665       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.535672       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.536272       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:04.536355       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559304       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559354       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559375       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559382       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.559735       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:14.560332       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568057       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568103       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568116       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.568938       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:24.569042       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:34.584121       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:34.584277       1 main.go:227] handling current node
	I0612 15:03:40.547620   13752 command_runner.go:130] ! I0612 21:57:34.584502       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548165   13752 command_runner.go:130] ! I0612 21:57:34.584607       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548165   13752 command_runner.go:130] ! I0612 21:57:34.584995       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.548165   13752 command_runner.go:130] ! I0612 21:57:34.585095       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.548215   13752 command_runner.go:130] ! I0612 21:57:44.600201       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548215   13752 command_runner.go:130] ! I0612 21:57:44.600339       1 main.go:227] handling current node
	I0612 15:03:40.548215   13752 command_runner.go:130] ! I0612 21:57:44.600353       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548256   13752 command_runner.go:130] ! I0612 21:57:44.600361       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548256   13752 command_runner.go:130] ! I0612 21:57:44.600842       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:40.548300   13752 command_runner.go:130] ! I0612 21:57:44.600859       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:40.548300   13752 command_runner.go:130] ! I0612 21:57:54.615436       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548300   13752 command_runner.go:130] ! I0612 21:57:54.615497       1 main.go:227] handling current node
	I0612 15:03:40.548339   13752 command_runner.go:130] ! I0612 21:57:54.615511       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:57:54.615536       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.629487       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.629657       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.629797       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.629891       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.630131       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.631059       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:04.631221       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647500       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647527       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647539       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647544       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647661       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:14.647672       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.655905       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656017       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656064       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656140       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656636       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:24.656721       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.670254       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.670590       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.670966       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.671845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.672269       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:34.672369       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.682684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.682854       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.682877       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.682887       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.683737       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:44.683808       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691077       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691167       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691199       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691207       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691344       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:58:54.691357       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:59:04.700863       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:59:04.701017       1 main.go:227] handling current node
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:59:04.701032       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.548376   13752 command_runner.go:130] ! I0612 21:59:04.701040       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.548982   13752 command_runner.go:130] ! I0612 21:59:04.701620       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.548982   13752 command_runner.go:130] ! I0612 21:59:04.701736       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.548982   13752 command_runner.go:130] ! I0612 21:59:14.717668       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.548982   13752 command_runner.go:130] ! I0612 21:59:14.717949       1 main.go:227] handling current node
	I0612 15:03:40.549054   13752 command_runner.go:130] ! I0612 21:59:14.717991       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549054   13752 command_runner.go:130] ! I0612 21:59:14.718050       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:14.718200       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:14.718263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724311       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724441       1 main.go:227] handling current node
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724456       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724464       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724785       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:24.724853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.737266       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.737410       1 main.go:227] handling current node
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.737425       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.737432       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.738157       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:34.738269       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746123       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746292       1 main.go:227] handling current node
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746313       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746332       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746856       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:44.746925       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.752611       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.752658       1 main.go:227] handling current node
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.752671       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.752678       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.753183       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:40.549175   13752 command_runner.go:130] ! I0612 21:59:54.753277       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
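
The kindnet output above is its roughly 10-second reconciliation loop: each pass walks every node, notes the current node, and records each remote node's IP and pod CIDR. The interesting transition is at 21:58:04, when multinode-025000-m03 rejoined with a new IP (172.23.206.72) and a new CIDR (10.244.3.0/24 instead of 10.244.2.0/24), and kindnet programmed a fresh host route via that IP. The struct printed by routes.go is a netlink route; the following is a minimal sketch of that step, assuming the vishvananda/netlink package (the helper name is hypothetical, not kindnet's actual code):

    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    // addPodCIDRRoute mirrors the "Adding route" lines above: route a remote
    // node's pod CIDR via that node's IP. (Hypothetical helper for illustration.)
    func addPodCIDRRoute(cidr, nodeIP string) error {
        _, dst, err := net.ParseCIDR(cidr)
        if err != nil {
            return err
        }
        // RouteReplace is idempotent, so re-running it on every ~10s pass is safe
        // and also handles the CIDR change seen at 21:58:04.
        return netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: net.ParseIP(nodeIP)})
    }

    func main() {
        if err := addPodCIDRRoute("10.244.3.0/24", "172.23.206.72"); err != nil {
            log.Fatal(err)
        }
    }
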
	I0612 15:03:40.560569   13752 logs.go:123] Gathering logs for etcd [6b61f5f6483d] ...
	I0612 15:03:40.560569   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61f5f6483d"
	I0612 15:03:40.585934   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.594582Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:40.592338   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.595941Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.200.184:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.200.184:2380","--initial-cluster=multinode-025000=https://172.23.200.184:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.200.184:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.200.184:2380","--name=multinode-025000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0612 15:03:40.592386   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596165Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0612 15:03:40.592386   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.596271Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:40.592452   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596356Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.200.184:2380"]}
	I0612 15:03:40.592498   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596492Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:40.592498   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.611167Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"]}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.613093Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-025000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.643295Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"27.151363ms"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.674268Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702241Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","commit-index":2039}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=()"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became follower at term 2"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.70261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b93ef5bd064a9684 [peers: [], term: 2, commit: 2039, applied: 0, lastindex: 2039, lastterm: 2]"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.719372Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.724082Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1403}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.735755Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1769}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.743333Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.753311Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b93ef5bd064a9684","timeout":"7s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755587Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b93ef5bd064a9684"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755671Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b93ef5bd064a9684","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758078Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758939Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=(13348376537775904388)"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759589Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","added-peer-id":"b93ef5bd064a9684","added-peer-peer-urls":["https://172.23.198.154:2380"]}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.760197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","cluster-version":"3.5"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.761198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0612 15:03:40.592607   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.764395Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:40.593192   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.765492Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b93ef5bd064a9684","initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0612 15:03:40.593192   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766195Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766744Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.200.184:2380"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.767384Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.200.184:2380"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 is starting a new election at term 2"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.50332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became pre-candidate at term 2"}
	I0612 15:03:40.593246   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgPreVoteResp from b93ef5bd064a9684 at term 2"}
	I0612 15:03:40.593350   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became candidate at term 3"}
	I0612 15:03:40.593350   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgVoteResp from b93ef5bd064a9684 at term 3"}
	I0612 15:03:40.593395   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became leader at term 3"}
	I0612 15:03:40.593395   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b93ef5bd064a9684 elected leader b93ef5bd064a9684 at term 3"}
	I0612 15:03:40.593395   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:40.593445   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:40.593488   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b93ef5bd064a9684","local-member-attributes":"{Name:multinode-025000 ClientURLs:[https://172.23.200.184:2379]}","request-path":"/0/members/b93ef5bd064a9684/attributes","cluster-id":"a7fa2563dcb4b7b8","publish-timeout":"7s"}
	I0612 15:03:40.593555   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.512996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0612 15:03:40.593555   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.513243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0612 15:03:40.593609   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.514729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0612 15:03:40.593650   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.515422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.200.184:2379"}
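
The etcd log shows a clean single-member restart: the existing data dir is reused, the WAL is replayed ("No snapshot found. Recovering WAL from scratch!"), member b93ef5bd064a9684 pre-votes and elects itself leader at term 3, and client traffic is then served with TLS and client-cert auth on 127.0.0.1:2379 and 172.23.200.184:2379. A minimal Go sketch of probing such an endpoint with go.etcd.io/etcd/client/v3, assuming a client keypair signed by the same CA (the client cert paths below are placeholders for illustration, not what minikube's components actually present):

    package main

    import (
        "context"
        "crypto/tls"
        "crypto/x509"
        "log"
        "os"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // client-cert-auth=true in the flags above, so the client must present
        // a certificate signed by the CA in --trusted-ca-file.
        cert, err := tls.LoadX509KeyPair("client.crt", "client.key") // placeholder paths
        if err != nil {
            log.Fatal(err)
        }
        caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
            TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        })
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        st, err := cli.Status(ctx, "https://127.0.0.1:2379")
        if err != nil {
            log.Fatal(err)
        }
        // After the re-election above we would expect raftTerm=3.
        log.Printf("leader=%x raftTerm=%d", st.Leader, st.RaftTerm)
    }
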
	I0612 15:03:40.600476   13752 logs.go:123] Gathering logs for coredns [26e5daf354e3] ...
	I0612 15:03:40.600476   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e5daf354e3"
	I0612 15:03:40.628932   13752 command_runner.go:130] > .:53
	I0612 15:03:40.628932   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:40.628932   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:40.628932   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:40.628932   13752 command_runner.go:130] > [INFO] 127.0.0.1:54952 - 9035 "HINFO IN 225709527310201015.7757756956422223857. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039110892s
	I0612 15:03:43.155729   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:03:43.183777   13752 command_runner.go:130] > 1830
	I0612 15:03:43.183809   13752 api_server.go:72] duration metric: took 1m7.3621211s to wait for apiserver process to appear ...
	I0612 15:03:43.183809   13752 api_server.go:88] waiting for apiserver healthz status ...
	I0612 15:03:43.192231   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0612 15:03:43.216346   13752 command_runner.go:130] > bbe2d2e51b5f
	I0612 15:03:43.217414   13752 logs.go:276] 1 containers: [bbe2d2e51b5f]
	I0612 15:03:43.226578   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0612 15:03:43.248473   13752 command_runner.go:130] > 6b61f5f6483d
	I0612 15:03:43.248529   13752 logs.go:276] 1 containers: [6b61f5f6483d]
	I0612 15:03:43.257990   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0612 15:03:43.283767   13752 command_runner.go:130] > 26e5daf354e3
	I0612 15:03:43.285218   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:03:43.285218   13752 logs.go:276] 2 containers: [26e5daf354e3 e83cf4eef49e]
	I0612 15:03:43.293981   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0612 15:03:43.318070   13752 command_runner.go:130] > 755750ecd1e3
	I0612 15:03:43.318070   13752 command_runner.go:130] > 6b021c195669
	I0612 15:03:43.318070   13752 logs.go:276] 2 containers: [755750ecd1e3 6b021c195669]
	I0612 15:03:43.328230   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0612 15:03:43.350541   13752 command_runner.go:130] > 227a905829b0
	I0612 15:03:43.350541   13752 command_runner.go:130] > c4842faba751
	I0612 15:03:43.352203   13752 logs.go:276] 2 containers: [227a905829b0 c4842faba751]
	I0612 15:03:43.361504   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0612 15:03:43.383671   13752 command_runner.go:130] > 7acc8ff0a931
	I0612 15:03:43.383671   13752 command_runner.go:130] > 685d167da53c
	I0612 15:03:43.384233   13752 logs.go:276] 2 containers: [7acc8ff0a931 685d167da53c]
	I0612 15:03:43.395335   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0612 15:03:43.417367   13752 command_runner.go:130] > cccfd1e9fef5
	I0612 15:03:43.417911   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:03:43.417911   13752 logs.go:276] 2 containers: [cccfd1e9fef5 4d60d82f6bc5]
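
The block above is the log-collection pass: for each control-plane component the runner lists matching container IDs with a docker ps name filter, then tails the last 400 lines of each. A rough Go equivalent of that loop via os/exec, not minikube's actual implementation (the filter names and tail count mirror the commands in the log; error handling is simplified):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // containerIDs lists container IDs for one component, matching the
    // "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" calls above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("coredns")
        if err != nil {
            log.Fatal(err)
        }
        for _, id := range ids {
            // Same tail depth as the "docker logs --tail 400 <id>" runs above.
            logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("=== %s ===\n%s", id, logs)
        }
    }
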
	I0612 15:03:43.417911   13752 logs.go:123] Gathering logs for coredns [e83cf4eef49e] ...
	I0612 15:03:43.417911   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83cf4eef49e"
	I0612 15:03:43.448914   13752 command_runner.go:130] > .:53
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:43.449013   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:43.449013   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 127.0.0.1:53490 - 39118 "HINFO IN 4677201826540465335.2322207397622737457. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048277073s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:49256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267302s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:54623 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.08558s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:51804 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.048771085s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:53027 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.100151983s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:34534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001199s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:44985 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000141701s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:54544 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000543s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:55517 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000123601s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:42995 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099501s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:51839 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.135718274s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:52123 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000304602s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:36740 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274801s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:48333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003287018s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:55754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000962s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:51695 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000224102s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:49605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096301s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:37746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283001s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:54995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106501s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:49201 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077401s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:60577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:36057 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107301s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:43898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:49177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:45207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000584s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:36676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151001s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:60305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305802s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:37468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:34743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240801s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:42306 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000309601s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:36509 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152901s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.1.2:55614 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000545s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:39195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130301s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:34618 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272902s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:44444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177201s
	I0612 15:03:43.449013   13752 command_runner.go:130] > [INFO] 10.244.0.3:35691 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001307s
	I0612 15:03:43.449596   13752 command_runner.go:130] > [INFO] 10.244.1.2:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	I0612 15:03:43.449596   13752 command_runner.go:130] > [INFO] 10.244.1.2:41925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207401s
	I0612 15:03:43.449596   13752 command_runner.go:130] > [INFO] 10.244.1.2:44306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000736s
	I0612 15:03:43.449596   13752 command_runner.go:130] > [INFO] 10.244.1.2:46158 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000547s
	I0612 15:03:43.449651   13752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0612 15:03:43.449651   13752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
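
This CoreDNS instance logs each query with its type, result code, and latency, and the sequence makes the pod search-path expansion visible: "kubernetes.default" fails as-is and as "kubernetes.default.default.svc.cluster.local" (NXDOMAIN) before "kubernetes.default.svc.cluster.local" answers NOERROR. The PTR queries for 10.0.96.10.in-addr.arpa are reverse lookups of the cluster DNS ClusterIP, 10.96.0.10. A small sketch that resolves the fully qualified service name directly against that IP (the resolver address is assumed from those reverse lookups):

    package main

    import (
        "context"
        "log"
        "net"
        "time"
    )

    func main() {
        // Point a resolver straight at the cluster DNS service instead of the
        // host's configured resolvers.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, "udp", "10.96.0.10:53")
            },
        }
        // The fully qualified name answers NOERROR in the log; the short form
        // "kubernetes.default" only resolves after search-path expansion.
        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
        if err != nil {
            log.Fatal(err)
        }
        log.Println(addrs)
    }
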
	I0612 15:03:43.452683   13752 logs.go:123] Gathering logs for kube-scheduler [6b021c195669] ...
	I0612 15:03:43.452683   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b021c195669"
	I0612 15:03:43.485388   13752 command_runner.go:130] ! I0612 21:39:26.474423       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:43.485786   13752 command_runner.go:130] ! W0612 21:39:28.263287       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:43.485857   13752 command_runner.go:130] ! W0612 21:39:28.263543       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.485911   13752 command_runner.go:130] ! W0612 21:39:28.263706       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:43.485968   13752 command_runner.go:130] ! W0612 21:39:28.263849       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:43.485968   13752 command_runner.go:130] ! I0612 21:39:28.303051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:43.485968   13752 command_runner.go:130] ! I0612 21:39:28.305840       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.486017   13752 command_runner.go:130] ! I0612 21:39:28.310682       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:43.486017   13752 command_runner.go:130] ! I0612 21:39:28.312812       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:43.486017   13752 command_runner.go:130] ! I0612 21:39:28.313421       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:43.486017   13752 command_runner.go:130] ! I0612 21:39:28.313594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:43.486083   13752 command_runner.go:130] ! W0612 21:39:28.336905       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.486083   13752 command_runner.go:130] ! E0612 21:39:28.337826       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.486162   13752 command_runner.go:130] ! W0612 21:39:28.338227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:43.486202   13752 command_runner.go:130] ! E0612 21:39:28.338391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:43.486202   13752 command_runner.go:130] ! W0612 21:39:28.338652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:43.486202   13752 command_runner.go:130] ! E0612 21:39:28.338896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:43.486202   13752 command_runner.go:130] ! W0612 21:39:28.339195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:43.486397   13752 command_runner.go:130] ! E0612 21:39:28.339406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:43.486397   13752 command_runner.go:130] ! W0612 21:39:28.339694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:43.486476   13752 command_runner.go:130] ! E0612 21:39:28.339892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:43.486558   13752 command_runner.go:130] ! W0612 21:39:28.340188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:43.486596   13752 command_runner.go:130] ! E0612 21:39:28.340362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:43.486648   13752 command_runner.go:130] ! W0612 21:39:28.340697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.486719   13752 command_runner.go:130] ! E0612 21:39:28.341129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.486766   13752 command_runner.go:130] ! W0612 21:39:28.341447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.486805   13752 command_runner.go:130] ! E0612 21:39:28.341664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.486805   13752 command_runner.go:130] ! W0612 21:39:28.341989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:43.486853   13752 command_runner.go:130] ! E0612 21:39:28.342229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:43.486907   13752 command_runner.go:130] ! W0612 21:39:28.342540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487016   13752 command_runner.go:130] ! E0612 21:39:28.344839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487133   13752 command_runner.go:130] ! W0612 21:39:28.345316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:43.487186   13752 command_runner.go:130] ! E0612 21:39:28.347872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:43.487261   13752 command_runner.go:130] ! W0612 21:39:28.345596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:28.345651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:28.345691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:28.345823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:28.348490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:28.348742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:28.349066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:28.349147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.192073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.192126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.249000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.249248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.268880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.268972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.271696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.271839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.275489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! W0612 21:39:29.296739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487287   13752 command_runner.go:130] ! E0612 21:39:29.297145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.487892   13752 command_runner.go:130] ! W0612 21:39:29.433593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:43.487892   13752 command_runner.go:130] ! E0612 21:39:29.433887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:43.487967   13752 command_runner.go:130] ! W0612 21:39:29.471880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:43.487967   13752 command_runner.go:130] ! E0612 21:39:29.471994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:43.488066   13752 command_runner.go:130] ! W0612 21:39:29.482669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.488105   13752 command_runner.go:130] ! E0612 21:39:29.483008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:43.488105   13752 command_runner.go:130] ! W0612 21:39:29.569402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:43.488105   13752 command_runner.go:130] ! E0612 21:39:29.571433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:43.488172   13752 command_runner.go:130] ! W0612 21:39:29.677906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:43.488252   13752 command_runner.go:130] ! E0612 21:39:29.677950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:43.488314   13752 command_runner.go:130] ! W0612 21:39:29.687951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 21:39:29.688054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! W0612 21:39:29.780288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 21:39:29.780411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! W0612 21:39:29.832564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 21:39:29.832892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:43.488353   13752 command_runner.go:130] ! W0612 21:39:29.889591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 21:39:29.889868       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.488353   13752 command_runner.go:130] ! I0612 21:39:32.513980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:43.488353   13752 command_runner.go:130] ! E0612 22:00:01.172050       1 run.go:74] "command failed" err="finished without leader elect"
	I0612 15:03:43.500094   13752 logs.go:123] Gathering logs for kube-controller-manager [7acc8ff0a931] ...
	I0612 15:03:43.500094   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7acc8ff0a931"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.579013       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.927149       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.927184       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.930688       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.932993       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.933167       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:28.933539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.987820       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.988653       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.994458       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.995780       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:32.996873       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.005703       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.005720       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.006099       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.006120       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.011328       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.013199       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.013216       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:43.531221   13752 command_runner.go:130] ! W0612 22:02:33.045760       1 shared_informer.go:597] resyncPeriod 19h21m1.650821539s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.046400       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.046742       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.047003       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.047066       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:43.531221   13752 command_runner.go:130] ! I0612 22:02:33.047091       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:43.531770   13752 command_runner.go:130] ! I0612 22:02:33.047150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:43.531770   13752 command_runner.go:130] ! I0612 22:02:33.047175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:43.531865   13752 command_runner.go:130] ! I0612 22:02:33.047875       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:43.531962   13752 command_runner.go:130] ! I0612 22:02:33.048961       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:43.531962   13752 command_runner.go:130] ! I0612 22:02:33.049070       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:43.532048   13752 command_runner.go:130] ! I0612 22:02:33.049108       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:43.532048   13752 command_runner.go:130] ! I0612 22:02:33.049132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:43.532075   13752 command_runner.go:130] ! I0612 22:02:33.049173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:43.532119   13752 command_runner.go:130] ! I0612 22:02:33.049188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:43.532119   13752 command_runner.go:130] ! I0612 22:02:33.049203       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:43.532163   13752 command_runner.go:130] ! I0612 22:02:33.049218       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049235       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049307       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! W0612 22:02:33.049318       1 shared_informer.go:597] resyncPeriod 16h27m54.164006095s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049536       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049616       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049652       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049852       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.049880       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.052188       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.075270       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.088124       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.088224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.088312       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.092469       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.093016       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.093183       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099173       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099302       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099269       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.099467       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.102279       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.103692       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.103797       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.109335       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.109737       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.109801       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.109811       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.113018       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.114442       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.114573       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.118932       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.118955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.118979       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.532191   13752 command_runner.go:130] ! I0612 22:02:33.119791       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:43.532715   13752 command_runner.go:130] ! I0612 22:02:33.121411       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:43.532715   13752 command_runner.go:130] ! I0612 22:02:33.119985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.532715   13752 command_runner.go:130] ! I0612 22:02:33.122332       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:43.532767   13752 command_runner.go:130] ! I0612 22:02:33.122409       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:43.532767   13752 command_runner.go:130] ! I0612 22:02:33.122432       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.532817   13752 command_runner.go:130] ! I0612 22:02:33.122572       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:43.532848   13752 command_runner.go:130] ! I0612 22:02:33.122710       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:43.532848   13752 command_runner.go:130] ! I0612 22:02:33.122722       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:43.532881   13752 command_runner.go:130] ! I0612 22:02:33.122748       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.532911   13752 command_runner.go:130] ! I0612 22:02:33.132412       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:43.532911   13752 command_runner.go:130] ! I0612 22:02:33.132517       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:43.532939   13752 command_runner.go:130] ! I0612 22:02:33.132620       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:43.532939   13752 command_runner.go:130] ! I0612 22:02:33.132660       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:43.532939   13752 command_runner.go:130] ! I0612 22:02:33.132669       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:43.532983   13752 command_runner.go:130] ! I0612 22:02:33.139478       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:43.532983   13752 command_runner.go:130] ! I0612 22:02:33.139854       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:43.532983   13752 command_runner.go:130] ! I0612 22:02:33.140261       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:43.533021   13752 command_runner.go:130] ! I0612 22:02:33.169621       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:43.533021   13752 command_runner.go:130] ! I0612 22:02:33.169819       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:43.533058   13752 command_runner.go:130] ! I0612 22:02:33.169849       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:43.533058   13752 command_runner.go:130] ! I0612 22:02:33.170074       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:43.533093   13752 command_runner.go:130] ! I0612 22:02:33.173816       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:43.533093   13752 command_runner.go:130] ! I0612 22:02:33.174120       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:43.533135   13752 command_runner.go:130] ! I0612 22:02:33.174130       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:43.533135   13752 command_runner.go:130] ! I0612 22:02:33.184678       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:43.533135   13752 command_runner.go:130] ! I0612 22:02:33.186030       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:43.533172   13752 command_runner.go:130] ! I0612 22:02:33.192152       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:43.533209   13752 command_runner.go:130] ! I0612 22:02:33.192257       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:43.533244   13752 command_runner.go:130] ! I0612 22:02:33.192268       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:43.533244   13752 command_runner.go:130] ! I0612 22:02:33.194361       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:43.533244   13752 command_runner.go:130] ! I0612 22:02:33.194659       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:43.533288   13752 command_runner.go:130] ! I0612 22:02:33.194671       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:43.533288   13752 command_runner.go:130] ! I0612 22:02:33.200378       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:43.533324   13752 command_runner.go:130] ! I0612 22:02:33.200552       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:43.533324   13752 command_runner.go:130] ! I0612 22:02:33.200579       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:43.533361   13752 command_runner.go:130] ! I0612 22:02:33.203400       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:43.533361   13752 command_runner.go:130] ! I0612 22:02:33.203797       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:43.533361   13752 command_runner.go:130] ! I0612 22:02:33.203967       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:43.533396   13752 command_runner.go:130] ! I0612 22:02:33.207566       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:43.533396   13752 command_runner.go:130] ! I0612 22:02:33.207732       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:43.533396   13752 command_runner.go:130] ! I0612 22:02:33.207743       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:43.533458   13752 command_runner.go:130] ! I0612 22:02:33.207766       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:43.533458   13752 command_runner.go:130] ! I0612 22:02:33.214389       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:43.533498   13752 command_runner.go:130] ! I0612 22:02:33.214572       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:43.533498   13752 command_runner.go:130] ! I0612 22:02:33.214655       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:43.533498   13752 command_runner.go:130] ! I0612 22:02:33.220603       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:43.533548   13752 command_runner.go:130] ! I0612 22:02:33.221181       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:43.533548   13752 command_runner.go:130] ! I0612 22:02:33.222958       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:43.533548   13752 command_runner.go:130] ! E0612 22:02:33.228603       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:43.533596   13752 command_runner.go:130] ! I0612 22:02:33.228994       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:43.533596   13752 command_runner.go:130] ! I0612 22:02:33.253059       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:43.533650   13752 command_runner.go:130] ! I0612 22:02:33.253281       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:43.533685   13752 command_runner.go:130] ! I0612 22:02:33.253292       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:43.533685   13752 command_runner.go:130] ! I0612 22:02:33.264081       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:43.533685   13752 command_runner.go:130] ! I0612 22:02:33.266480       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:43.533726   13752 command_runner.go:130] ! I0612 22:02:33.266606       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:43.533766   13752 command_runner.go:130] ! I0612 22:02:33.266742       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:43.533766   13752 command_runner.go:130] ! I0612 22:02:33.380173       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:43.533766   13752 command_runner.go:130] ! I0612 22:02:33.380458       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:43.533808   13752 command_runner.go:130] ! I0612 22:02:33.380796       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:43.533808   13752 command_runner.go:130] ! I0612 22:02:33.398346       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:43.533847   13752 command_runner.go:130] ! I0612 22:02:33.401718       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:43.533847   13752 command_runner.go:130] ! I0612 22:02:33.401737       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:43.533883   13752 command_runner.go:130] ! I0612 22:02:33.495874       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:43.533883   13752 command_runner.go:130] ! I0612 22:02:33.496386       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:43.533922   13752 command_runner.go:130] ! I0612 22:02:33.498064       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:43.533957   13752 command_runner.go:130] ! I0612 22:02:33.698817       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:43.533957   13752 command_runner.go:130] ! I0612 22:02:33.699215       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:43.533957   13752 command_runner.go:130] ! I0612 22:02:33.699646       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:43.533997   13752 command_runner.go:130] ! I0612 22:02:33.744449       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:43.534054   13752 command_runner.go:130] ! I0612 22:02:33.744531       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:43.534054   13752 command_runner.go:130] ! I0612 22:02:33.744546       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:43.534054   13752 command_runner.go:130] ! E0612 22:02:33.807267       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:43.534092   13752 command_runner.go:130] ! I0612 22:02:33.807295       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:43.534126   13752 command_runner.go:130] ! I0612 22:02:33.856639       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:43.534126   13752 command_runner.go:130] ! I0612 22:02:33.857088       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:43.534164   13752 command_runner.go:130] ! I0612 22:02:33.857273       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:43.534164   13752 command_runner.go:130] ! I0612 22:02:33.894016       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:43.534198   13752 command_runner.go:130] ! I0612 22:02:33.896048       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:43.534198   13752 command_runner.go:130] ! I0612 22:02:33.896083       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:43.534236   13752 command_runner.go:130] ! I0612 22:02:33.950707       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:43.534236   13752 command_runner.go:130] ! I0612 22:02:33.950731       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:43.534270   13752 command_runner.go:130] ! I0612 22:02:33.950771       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:43.534308   13752 command_runner.go:130] ! I0612 22:02:33.950821       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:43.534308   13752 command_runner.go:130] ! I0612 22:02:33.950870       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:43.534342   13752 command_runner.go:130] ! I0612 22:02:33.995005       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:43.534342   13752 command_runner.go:130] ! I0612 22:02:33.995247       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:43.534380   13752 command_runner.go:130] ! I0612 22:02:44.062766       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:43.534380   13752 command_runner.go:130] ! I0612 22:02:44.063067       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:43.534419   13752 command_runner.go:130] ! I0612 22:02:44.063362       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:43.534419   13752 command_runner.go:130] ! I0612 22:02:44.063411       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:43.534457   13752 command_runner.go:130] ! I0612 22:02:44.068203       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:43.534457   13752 command_runner.go:130] ! I0612 22:02:44.068603       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:43.534457   13752 command_runner.go:130] ! I0612 22:02:44.068777       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:43.534497   13752 command_runner.go:130] ! I0612 22:02:44.071309       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:43.534497   13752 command_runner.go:130] ! I0612 22:02:44.071638       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:43.534535   13752 command_runner.go:130] ! I0612 22:02:44.071795       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:43.534535   13752 command_runner.go:130] ! I0612 22:02:44.080804       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:43.534575   13752 command_runner.go:130] ! I0612 22:02:44.097810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.534575   13752 command_runner.go:130] ! I0612 22:02:44.100018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:43.534575   13752 command_runner.go:130] ! I0612 22:02:44.100030       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:43.534613   13752 command_runner.go:130] ! I0612 22:02:44.102193       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:43.534648   13752 command_runner.go:130] ! I0612 22:02:44.102337       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:43.534686   13752 command_runner.go:130] ! I0612 22:02:44.102640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.534686   13752 command_runner.go:130] ! I0612 22:02:44.102796       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:43.534720   13752 command_runner.go:130] ! I0612 22:02:44.102925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:43.534758   13752 command_runner.go:130] ! I0612 22:02:44.102986       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.534758   13752 command_runner.go:130] ! I0612 22:02:44.113771       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:43.534758   13752 command_runner.go:130] ! I0612 22:02:44.115010       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:43.534792   13752 command_runner.go:130] ! I0612 22:02:44.115463       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:43.534830   13752 command_runner.go:130] ! I0612 22:02:44.119062       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:43.534830   13752 command_runner.go:130] ! I0612 22:02:44.121259       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:43.534830   13752 command_runner.go:130] ! I0612 22:02:44.124526       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:43.534830   13752 command_runner.go:130] ! I0612 22:02:44.124650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:43.534870   13752 command_runner.go:130] ! I0612 22:02:44.124971       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:43.534908   13752 command_runner.go:130] ! I0612 22:02:44.126246       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:43.534908   13752 command_runner.go:130] ! I0612 22:02:44.133682       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:43.534908   13752 command_runner.go:130] ! I0612 22:02:44.134026       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:43.534942   13752 command_runner.go:130] ! I0612 22:02:44.141044       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:43.534942   13752 command_runner.go:130] ! I0612 22:02:44.145563       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:43.534980   13752 command_runner.go:130] ! I0612 22:02:44.158513       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:43.535015   13752 command_runner.go:130] ! I0612 22:02:44.162319       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:43.535015   13752 command_runner.go:130] ! I0612 22:02:44.162613       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:43.535053   13752 command_runner.go:130] ! I0612 22:02:44.162653       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:43.535053   13752 command_runner.go:130] ! I0612 22:02:44.163186       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 15:03:43.535087   13752 command_runner.go:130] ! I0612 22:02:44.164074       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:43.535087   13752 command_runner.go:130] ! I0612 22:02:44.164451       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:43.535125   13752 command_runner.go:130] ! I0612 22:02:44.164672       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:43.535125   13752 command_runner.go:130] ! I0612 22:02:44.164769       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:43.535165   13752 command_runner.go:130] ! I0612 22:02:44.164780       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:43.535165   13752 command_runner.go:130] ! I0612 22:02:44.167842       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:43.535165   13752 command_runner.go:130] ! I0612 22:02:44.174384       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:43.535202   13752 command_runner.go:130] ! I0612 22:02:44.182521       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:43.535236   13752 command_runner.go:130] ! I0612 22:02:44.186460       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:43.535236   13752 command_runner.go:130] ! I0612 22:02:44.194992       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:43.535275   13752 command_runner.go:130] ! I0612 22:02:44.196327       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:43.535275   13752 command_runner.go:130] ! I0612 22:02:44.196530       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:43.535275   13752 command_runner.go:130] ! I0612 22:02:44.196665       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:43.535315   13752 command_runner.go:130] ! I0612 22:02:44.200768       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:43.535315   13752 command_runner.go:130] ! I0612 22:02:44.200988       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:43.535315   13752 command_runner.go:130] ! I0612 22:02:44.201846       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.207493       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.228051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.792655ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.231633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.306µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.244808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.644732ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.246402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.002µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.297636       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.304265       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.304486       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.311023       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.350865       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.351039       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.353535       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.369296       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.372273       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.381442       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.821842       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.870923       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:02:44.871005       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:11.878868       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:24.254264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.921834ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:24.256639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.601µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.832133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.001µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.905221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.518825ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.905853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.201µs"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.917312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.821108ms"
	I0612 15:03:43.535353   13752 command_runner.go:130] ! I0612 22:03:37.917472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
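[editor's note] The "Waiting for caches to sync" / "Caches are synced" pairs in the controller-manager block above come from client-go's shared informers: each controller blocks until its informer caches have synced before starting workers. A minimal sketch of that same pattern, assuming a reachable cluster and the default kubeconfig location (hypothetical pod-watching example, not minikube's own code):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default home location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// This wait is what emits the "Waiting for caches to sync" /
	// "Caches are synced" log pairs seen above.
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches are synced; safe to start workers")
}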
	I0612 15:03:43.553154   13752 logs.go:123] Gathering logs for kube-controller-manager [685d167da53c] ...
	I0612 15:03:43.553154   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685d167da53c"
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.275086       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.758419       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.759036       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.761311       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.761663       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.762454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:43.577027   13752 command_runner.go:130] ! I0612 21:39:26.762652       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.260969       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.261096       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:43.581906   13752 command_runner.go:130] ! E0612 21:39:31.316508       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.316587       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.342032       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.342287       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:43.581906   13752 command_runner.go:130] ! I0612 21:39:31.342304       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.362243       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.399024       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.399081       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.399264       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.443376       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.443603       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.443617       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.480477       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:43.582044   13752 command_runner.go:130] ! I0612 21:39:31.480993       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.481007       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.523943       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.524182       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.524535       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.524741       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.553194       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.554412       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.556852       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:43.582174   13752 command_runner.go:130] ! I0612 21:39:31.560273       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.560448       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.561614       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.561933       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.593308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.593438       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.593459       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:43.582285   13752 command_runner.go:130] ! I0612 21:39:31.593488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:43.582406   13752 command_runner.go:130] ! I0612 21:39:31.593534       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:43.582406   13752 command_runner.go:130] ! I0612 21:39:31.593588       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:43.582406   13752 command_runner.go:130] ! I0612 21:39:31.593611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:43.582406   13752 command_runner.go:130] ! I0612 21:39:31.593650       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:43.582530   13752 command_runner.go:130] ! I0612 21:39:31.593684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:43.582530   13752 command_runner.go:130] ! I0612 21:39:31.593701       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:43.582594   13752 command_runner.go:130] ! I0612 21:39:31.593721       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:43.582594   13752 command_runner.go:130] ! I0612 21:39:31.593739       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.593950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594051       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594262       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:43.582636   13752 command_runner.go:130] ! I0612 21:39:31.594306       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.594500       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.594602       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.594857       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.594957       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.595276       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.595463       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:43.582737   13752 command_runner.go:130] ! I0612 21:39:31.605247       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.605722       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.607199       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.668704       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.669329       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:43.582879   13752 command_runner.go:130] ! I0612 21:39:31.669521       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:43.582952   13752 command_runner.go:130] ! I0612 21:39:31.820968       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:43.582952   13752 command_runner.go:130] ! I0612 21:39:31.821104       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:43.582952   13752 command_runner.go:130] ! I0612 21:39:31.821117       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:43.582952   13752 command_runner.go:130] ! I0612 21:39:31.973500       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:31.973543       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:31.975344       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:31.975377       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:32.163715       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:32.163860       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:32.320380       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:43.583035   13752 command_runner.go:130] ! I0612 21:39:32.320516       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.320529       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.468817       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.468893       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.636144       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.636921       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.637331       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.775300       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.776007       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.778803       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:43.583141   13752 command_runner.go:130] ! I0612 21:39:32.920254       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:32.920359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:32.920902       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.069533       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.069689       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.069704       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.069713       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:43.583252   13752 command_runner.go:130] ! I0612 21:39:33.115693       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.115796       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.115809       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.116021       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.116257       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.116416       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.169481       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:43.583414   13752 command_runner.go:130] ! I0612 21:39:33.169523       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.169561       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.170619       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.170693       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.170745       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.171426       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.171458       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:43.583529   13752 command_runner.go:130] ! I0612 21:39:33.171479       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.583645   13752 command_runner.go:130] ! I0612 21:39:33.172032       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:43.583645   13752 command_runner.go:130] ! I0612 21:39:33.172160       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:43.583645   13752 command_runner.go:130] ! I0612 21:39:33.172352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:43.583645   13752 command_runner.go:130] ! I0612 21:39:33.172295       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:43.583768   13752 command_runner.go:130] ! I0612 21:39:43.229790       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:43.583768   13752 command_runner.go:130] ! I0612 21:39:43.230104       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:43.583768   13752 command_runner.go:130] ! I0612 21:39:43.230715       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:43.583768   13752 command_runner.go:130] ! I0612 21:39:43.230868       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:43.583832   13752 command_runner.go:130] ! E0612 21:39:43.246433       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:43.583832   13752 command_runner.go:130] ! I0612 21:39:43.246740       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:43.583832   13752 command_runner.go:130] ! I0612 21:39:43.246878       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:43.583832   13752 command_runner.go:130] ! I0612 21:39:43.247178       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:43.583832   13752 command_runner.go:130] ! I0612 21:39:43.259694       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.260105       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.260326       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.287038       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.287747       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.289545       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:43.583923   13752 command_runner.go:130] ! I0612 21:39:43.296881       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.297485       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.297679       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.315673       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.316362       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.316724       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.331329       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.331610       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:43.584025   13752 command_runner.go:130] ! I0612 21:39:43.331966       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.358081       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.358485       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.358595       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.358609       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.373221       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.373371       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.373388       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.386049       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:43.584139   13752 command_runner.go:130] ! I0612 21:39:43.386265       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.387457       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.473855       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.474115       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.474421       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.622457       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.622831       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:43.584264   13752 command_runner.go:130] ! I0612 21:39:43.622950       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:43.584378   13752 command_runner.go:130] ! I0612 21:39:43.776632       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:43.584378   13752 command_runner.go:130] ! I0612 21:39:43.777149       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:43.584378   13752 command_runner.go:130] ! I0612 21:39:43.777203       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:43.584378   13752 command_runner.go:130] ! I0612 21:39:43.923199       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:43.923416       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:43.923557       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.219008       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.219041       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.219093       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.219104       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.375322       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:43.584431   13752 command_runner.go:130] ! I0612 21:39:44.375879       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:43.584533   13752 command_runner.go:130] ! I0612 21:39:44.375896       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:43.584533   13752 command_runner.go:130] ! I0612 21:39:44.419335       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:43.584533   13752 command_runner.go:130] ! I0612 21:39:44.419357       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:43.584533   13752 command_runner.go:130] ! I0612 21:39:44.419672       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.435364       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.441191       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.456985       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.457052       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.460648       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.463138       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.469825       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:43.584596   13752 command_runner.go:130] ! I0612 21:39:44.469846       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.469856       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.471608       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.471748       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.472789       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.474041       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.475483       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.475505       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.476080       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:43.584705   13752 command_runner.go:130] ! I0612 21:39:44.479252       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.481788       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.488300       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.491059       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.499063       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.500304       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.507471       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:43.584812   13752 command_runner.go:130] ! I0612 21:39:44.525355       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:43.584901   13752 command_runner.go:130] ! I0612 21:39:44.525889       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:43.584901   13752 command_runner.go:130] ! I0612 21:39:44.526177       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:43.584901   13752 command_runner.go:130] ! I0612 21:39:44.526390       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:43.584901   13752 command_runner.go:130] ! I0612 21:39:44.526550       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.526951       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.527038       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.528601       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.528834       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:43.584980   13752 command_runner.go:130] ! I0612 21:39:44.531261       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:43.585061   13752 command_runner.go:130] ! I0612 21:39:44.531462       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:43.585110   13752 command_runner.go:130] ! I0612 21:39:44.531679       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:43.585217   13752 command_runner.go:130] ! I0612 21:39:44.531942       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:43.585247   13752 command_runner.go:130] ! I0612 21:39:44.532097       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:43.585247   13752 command_runner.go:130] ! I0612 21:39:44.532523       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:43.585247   13752 command_runner.go:130] ! I0612 21:39:44.537873       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:43.585247   13752 command_runner.go:130] ! I0612 21:39:44.543447       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:43.585291   13752 command_runner.go:130] ! I0612 21:39:44.564610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:43.585329   13752 command_runner.go:130] ! I0612 21:39:44.568950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000" podCIDRs=["10.244.0.0/24"]
	I0612 15:03:43.585329   13752 command_runner.go:130] ! I0612 21:39:44.621264       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:43.585329   13752 command_runner.go:130] ! I0612 21:39:44.644803       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:43.585388   13752 command_runner.go:130] ! I0612 21:39:44.677466       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:43.585421   13752 command_runner.go:130] ! I0612 21:39:44.696400       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:43.585421   13752 command_runner.go:130] ! I0612 21:39:44.723303       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:43.585421   13752 command_runner.go:130] ! I0612 21:39:44.735837       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:43.585421   13752 command_runner.go:130] ! I0612 21:39:44.758870       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.157877       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.226557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.226973       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.795416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="243.746414ms"
	I0612 15:03:43.585472   13752 command_runner.go:130] ! I0612 21:39:45.868449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.90937ms"
	I0612 15:03:43.585607   13752 command_runner.go:130] ! I0612 21:39:45.868845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.402µs"
	I0612 15:03:43.585629   13752 command_runner.go:130] ! I0612 21:39:45.869382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.903µs"
	I0612 15:03:43.585629   13752 command_runner.go:130] ! I0612 21:39:45.905402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="386.807µs"
	I0612 15:03:43.585629   13752 command_runner.go:130] ! I0612 21:39:46.349409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.452815ms"
	I0612 15:03:43.585629   13752 command_runner.go:130] ! I0612 21:39:46.386321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.301621ms"
	I0612 15:03:43.585720   13752 command_runner.go:130] ! I0612 21:39:46.386974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="616.309µs"
	I0612 15:03:43.585746   13752 command_runner.go:130] ! I0612 21:39:56.441072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="366.601µs"
	I0612 15:03:43.585785   13752 command_runner.go:130] ! I0612 21:39:56.465727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.4µs"
	I0612 15:03:43.585785   13752 command_runner.go:130] ! I0612 21:39:57.870560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.5µs"
	I0612 15:03:43.585824   13752 command_runner.go:130] ! I0612 21:39:58.874445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448319ms"
	I0612 15:03:43.585854   13752 command_runner.go:130] ! I0612 21:39:58.875168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="103.901µs"
	I0612 15:03:43.585903   13752 command_runner.go:130] ! I0612 21:39:59.529553       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:43.585903   13752 command_runner.go:130] ! I0612 21:42:39.169243       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:43.585903   13752 command_runner.go:130] ! I0612 21:42:39.188142       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 15:03:43.585980   13752 command_runner.go:130] ! I0612 21:42:39.563565       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:43.585980   13752 command_runner.go:130] ! I0612 21:42:58.063730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586006   13752 command_runner.go:130] ! I0612 21:43:24.138579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.052538ms"
	I0612 15:03:43.586006   13752 command_runner.go:130] ! I0612 21:43:24.156190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.434267ms"
	I0612 15:03:43.586079   13752 command_runner.go:130] ! I0612 21:43:24.156677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.099µs"
	I0612 15:03:43.586079   13752 command_runner.go:130] ! I0612 21:43:24.183391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.299µs"
	I0612 15:03:43.586079   13752 command_runner.go:130] ! I0612 21:43:26.908415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.051448ms"
	I0612 15:03:43.586162   13752 command_runner.go:130] ! I0612 21:43:26.908853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34µs"
	I0612 15:03:43.586162   13752 command_runner.go:130] ! I0612 21:43:27.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.474956ms"
	I0612 15:03:43.586162   13752 command_runner.go:130] ! I0612 21:43:27.304566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488944ms"
	I0612 15:03:43.586327   13752 command_runner.go:130] ! I0612 21:47:16.485552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586327   13752 command_runner.go:130] ! I0612 21:47:16.486568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:43.586327   13752 command_runner.go:130] ! I0612 21:47:16.503987       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.2.0/24"]
	I0612 15:03:43.586397   13752 command_runner.go:130] ! I0612 21:47:19.629018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:43.586397   13752 command_runner.go:130] ! I0612 21:47:35.032365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:55:19.767980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:57:52.374240       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:57:58.774442       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:57:58.774588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:57:58.809041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.3.0/24"]
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:58:06.126407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:43.586459   13752 command_runner.go:130] ! I0612 21:59:45.222238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
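[editor's note] The "Gathering logs for kube-controller-manager [685d167da53c]" step shows how these sections are produced: the harness runs docker logs --tail 400 <container-id> for each container of interest (over SSH inside the VM, via its ssh_runner). A minimal local sketch of that pattern with os/exec; the container ID is the one from this run and would need to be substituted with one that exists on your machine:

package main

import (
	"fmt"
	"os/exec"
)

// dockerTailLogs mirrors the command visible in the report:
//   /bin/bash -c "docker logs --tail 400 <container-id>"
// The real harness executes it over SSH inside the minikube VM;
// this sketch runs it against the local Docker daemon instead.
func dockerTailLogs(containerID string, tail int) (string, error) {
	cmd := exec.Command("docker", "logs", "--tail", fmt.Sprint(tail), containerID)
	out, err := cmd.CombinedOutput() // docker logs writes to stdout and stderr
	return string(out), err
}

func main() {
	logs, err := dockerTailLogs("685d167da53c", 400)
	if err != nil {
		fmt.Println("docker logs failed:", err)
	}
	fmt.Print(logs)
}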
	I0612 15:03:43.601714   13752 logs.go:123] Gathering logs for kindnet [cccfd1e9fef5] ...
	I0612 15:03:43.601714   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccfd1e9fef5"
	I0612 15:03:43.634705   13752 command_runner.go:130] ! I0612 22:02:33.621070       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0612 15:03:43.635316   13752 command_runner.go:130] ! I0612 22:02:33.621857       1 main.go:107] hostIP = 172.23.200.184
	I0612 15:03:43.635316   13752 command_runner.go:130] ! podIP = 172.23.200.184
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:02:33.622055       1 main.go:116] setting mtu 1500 for CNI 
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:02:33.622069       1 main.go:146] kindnetd IP family: "ipv4"
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:02:33.622082       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:03:03.928722       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0612 15:03:43.635378   13752 command_runner.go:130] ! I0612 22:03:03.948068       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:43.635455   13752 command_runner.go:130] ! I0612 22:03:03.948207       1 main.go:227] handling current node
	I0612 15:03:43.635455   13752 command_runner.go:130] ! I0612 22:03:04.015006       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:43.635511   13752 command_runner.go:130] ! I0612 22:03:04.015280       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:43.635511   13752 command_runner.go:130] ! I0612 22:03:04.015617       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.196.105 Flags: [] Table: 0} 
	I0612 15:03:43.635604   13752 command_runner.go:130] ! I0612 22:03:04.015960       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:43.635604   13752 command_runner.go:130] ! I0612 22:03:04.015976       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:43.635604   13752 command_runner.go:130] ! I0612 22:03:04.016053       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:43.635649   13752 command_runner.go:130] ! I0612 22:03:14.032118       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:43.635649   13752 command_runner.go:130] ! I0612 22:03:14.032228       1 main.go:227] handling current node
	I0612 15:03:43.635649   13752 command_runner.go:130] ! I0612 22:03:14.032243       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:43.635696   13752 command_runner.go:130] ! I0612 22:03:14.032255       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:43.635696   13752 command_runner.go:130] ! I0612 22:03:14.032739       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:43.635740   13752 command_runner.go:130] ! I0612 22:03:14.032836       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:43.635740   13752 command_runner.go:130] ! I0612 22:03:24.045393       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:43.635740   13752 command_runner.go:130] ! I0612 22:03:24.045492       1 main.go:227] handling current node
	I0612 15:03:43.635791   13752 command_runner.go:130] ! I0612 22:03:24.045504       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:43.635791   13752 command_runner.go:130] ! I0612 22:03:24.045510       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:43.635791   13752 command_runner.go:130] ! I0612 22:03:24.045926       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:43.635850   13752 command_runner.go:130] ! I0612 22:03:24.045941       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:43.635896   13752 command_runner.go:130] ! I0612 22:03:34.052186       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:43.635896   13752 command_runner.go:130] ! I0612 22:03:34.052288       1 main.go:227] handling current node
	I0612 15:03:43.635896   13752 command_runner.go:130] ! I0612 22:03:34.052302       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:43.635935   13752 command_runner.go:130] ! I0612 22:03:34.052309       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:43.635935   13752 command_runner.go:130] ! I0612 22:03:34.052423       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:43.635991   13752 command_runner.go:130] ! I0612 22:03:34.052452       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
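[editor's note] The kindnet block above shows its reconcile loop: roughly every ten seconds it lists nodes, skips the node it is running on ("handling current node"), and installs a route to each remote node's pod CIDR via that node's IP (the "Adding route {... Dst: 10.244.1.0/24 ... Gw: 172.23.196.105 ...}" lines). A dependency-free sketch of that derivation, with node names, IPs, and CIDRs hard-coded from this run; the real kindnetd programs the kernel routing table rather than printing:

package main

import (
	"fmt"
	"net"
)

type node struct {
	name    string
	ip      string // node (host) IP, used as the route's gateway
	podCIDR string // per-node pod CIDR allocated by the node-ipam-controller
}

func main() {
	self := "multinode-025000" // the node kindnet runs on
	nodes := []node{
		{"multinode-025000", "172.23.200.184", "10.244.0.0/24"},
		{"multinode-025000-m02", "172.23.196.105", "10.244.1.0/24"},
		{"multinode-025000-m03", "172.23.206.72", "10.244.3.0/24"},
	}
	for _, n := range nodes {
		if n.name == self {
			continue // no route needed for the local pod CIDR
		}
		_, dst, err := net.ParseCIDR(n.podCIDR)
		if err != nil {
			panic(err)
		}
		gw := net.ParseIP(n.ip)
		// Matches the logged form: Adding route {Dst: <podCIDR> Gw: <node IP>}
		fmt.Printf("Adding route {Dst: %s Gw: %s}\n", dst, gw)
	}
}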
	I0612 15:03:43.639582   13752 logs.go:123] Gathering logs for kubelet ...
	I0612 15:03:43.639625   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 15:03:43.673570   13752 command_runner.go:130] > Jun 12 22:02:21 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674608   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.063456    1381 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:43.674608   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064093    1381 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.674703   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064387    1381 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:43.674703   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: E0612 22:02:22.065868    1381 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:43.674703   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674783   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789327    1437 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:43.674863   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789465    1437 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.674863   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.790480    1437 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:43.674863   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: E0612 22:02:22.790564    1437 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:43.674863   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:43.674963   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:43.674963   13752 command_runner.go:130] > Jun 12 22:02:23 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674963   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:43.674963   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414046    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414147    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414632    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.416608    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.437750    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:43.675044   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458497    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0612 15:03:43.675120   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458849    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0612 15:03:43.675120   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460038    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0612 15:03:43.675200   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460095    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-025000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Top
ologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464057    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464080    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464924    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466519    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0612 15:03:43.675277   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466546    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0612 15:03:43.675368   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466613    1517 kubelet.go:312] "Adding apiserver pod source"
	I0612 15:03:43.675368   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.467352    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0612 15:03:43.675368   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.471384    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675368   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.471502    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675454   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.471869    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0612 15:03:43.675454   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.477415    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0612 15:03:43.675454   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.478424    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0612 15:03:43.675534   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.480523    1517 server.go:1264] "Started kubelet"
	I0612 15:03:43.675534   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.481568    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675534   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.481666    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.481865    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0612 15:03:43.675611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.482789    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0612 15:03:43.675611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.485497    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0612 15:03:43.675720   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.490040    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.493219    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.495119    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.496095    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.498560    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0612 15:03:43.675764   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501388    1517 factory.go:221] Registration of the systemd container factory successfully
	I0612 15:03:43.675847   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501556    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0612 15:03:43.675847   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501657    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0612 15:03:43.675847   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.510641    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675931   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.510706    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.675931   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.521028    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="200ms"
	I0612 15:03:43.676027   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.554579    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0612 15:03:43.676027   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.594809    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0612 15:03:43.676027   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595077    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0612 15:03:43.676027   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595178    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:43.676083   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598081    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0612 15:03:43.676083   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598418    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0612 15:03:43.676129   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598595    1517 policy_none.go:49] "None policy: Start"
	I0612 15:03:43.676129   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.600760    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.676129   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.602144    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:43.676129   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610755    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0612 15:03:43.676205   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610783    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0612 15:03:43.676205   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610843    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0612 15:03:43.676205   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.611758    1517 state_mem.go:75] "Updated machine memory state"
	I0612 15:03:43.676205   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.613995    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0612 15:03:43.676281   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.614216    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0612 15:03:43.676281   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615027    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0612 15:03:43.676281   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615636    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0612 15:03:43.676281   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615685    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0612 15:03:43.676355   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.615730    1517 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0612 15:03:43.676355   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.616221    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0612 15:03:43.676355   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.632621    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.676355   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.632711    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.676435   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.634150    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-025000\" not found"
	I0612 15:03:43.676435   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.644874    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:43.676435   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:43.676528   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:43.676528   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:43.676528   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:43.676601   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.717070    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d6071cd4356268889f798790dc93ce06" podNamespace="kube-system" podName="kube-apiserver-multinode-025000"
	I0612 15:03:43.676601   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.719714    1517 topology_manager.go:215] "Topology Admit Handler" podUID="88de11d8b1aaec126153d44e87c4b5dd" podNamespace="kube-system" podName="kube-controller-manager-multinode-025000"
	I0612 15:03:43.676601   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.720740    1517 topology_manager.go:215] "Topology Admit Handler" podUID="de62e7fd7d0feea82620e745032c1a67" podNamespace="kube-system" podName="kube-scheduler-multinode-025000"
	I0612 15:03:43.676684   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.722295    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="400ms"
	I0612 15:03:43.676684   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.724629    1517 topology_manager.go:215] "Topology Admit Handler" podUID="7b6b5637642f3d915c0db1461c7074e6" podNamespace="kube-system" podName="etcd-multinode-025000"
	I0612 15:03:43.676760   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725657    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad98f611536b15941d0f49c694b6b6c39318bca8a66620735a88a81a12d3610"
	I0612 15:03:43.676760   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725708    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb4351fab502e49592d49234119b810b53c5916eaf732d4ba148b3ad1eed4e6a"
	I0612 15:03:43.676760   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725720    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9e051df48486e732da2c72bf2d0e3ec93cf8774632ecedd8825e656ba04a93"
	I0612 15:03:43.676836   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725728    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2784305b1d5e9a088f0b73ff004b2d9eca305d397de3d7b9912638323d7c66b2"
	I0612 15:03:43.676836   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725737    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40443305b24f54fea9235d98bfb16f2d550b8914bfa46c0592b5c24be1ad5569"
	I0612 15:03:43.676836   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.736677    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9933fdc9ca72b65b57e5b4b996215763431b87f18af45fdc8195252497e1d9a"
	I0612 15:03:43.676912   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.760928    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d"
	I0612 15:03:43.676912   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.777475    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27"
	I0612 15:03:43.676912   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.794474    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f2d5f19e95ea2d1cfe140159a55c94f5d809c3b67661196b1e285ac389537f"
	I0612 15:03:43.676912   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.803790    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.676987   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.804820    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:43.676987   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885533    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-ca-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677069   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885705    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-ca-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:43.677069   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885746    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-k8s-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:43.677233   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885768    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-k8s-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677335   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885803    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-kubeconfig\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677335   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885844    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677421   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885869    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de62e7fd7d0feea82620e745032c1a67-kubeconfig\") pod \"kube-scheduler-multinode-025000\" (UID: \"de62e7fd7d0feea82620e745032c1a67\") " pod="kube-system/kube-scheduler-multinode-025000"
	I0612 15:03:43.677421   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885941    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-certs\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:43.677512   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885970    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-data\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:43.677512   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885997    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:43.677639   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.886023    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-flexvolume-dir\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:43.677639   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.124157    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="800ms"
	I0612 15:03:43.677744   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.206204    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.677826   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.207259    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:43.677826   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.576346    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.677826   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.576490    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.677942   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.832319    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.677942   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.832430    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678020   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.847085    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678020   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.847226    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678101   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.894179    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678101   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.894251    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:43.678178   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.910045    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7"
	I0612 15:03:43.678178   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.925848    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="1.6s"
	I0612 15:03:43.678260   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.967442    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:43.678260   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: I0612 22:02:27.008640    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.678260   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: E0612 22:02:27.009541    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:43.678338   13752 command_runner.go:130] > Jun 12 22:02:28 multinode-025000 kubelet[1517]: I0612 22:02:28.611782    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:43.678338   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.067503    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-025000"
	I0612 15:03:43.678338   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.069193    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-025000"
	I0612 15:03:43.678419   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.078543    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0612 15:03:43.678419   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.083746    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0612 15:03:43.678419   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.087512    1517 setters.go:580] "Node became not ready" node="multinode-025000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-12T22:02:31Z","lastTransitionTime":"2024-06-12T22:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0612 15:03:43.678496   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.485482    1517 apiserver.go:52] "Watching apiserver"
	I0612 15:03:43.678496   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.491838    1517 topology_manager.go:215] "Topology Admit Handler" podUID="1f004a05-3f5f-444b-9ac0-88f0e23da904" podNamespace="kube-system" podName="kindnet-bqlg8"
	I0612 15:03:43.678496   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.492246    1517 topology_manager.go:215] "Topology Admit Handler" podUID="10b24fa7-8eea-4fbb-ab18-404e853aa7ab" podNamespace="kube-system" podName="kube-proxy-47lr8"
	I0612 15:03:43.678496   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.493249    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-025000" podUID="6b429685-b322-4b00-83fc-743786ff40e1"
	I0612 15:03:43.678576   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494355    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-025000" podUID="630bafc4-4576-4974-b638-7ab52dcfec18"
	I0612 15:03:43.678652   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494642    1517 topology_manager.go:215] "Topology Admit Handler" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgcxw"
	I0612 15:03:43.678652   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494763    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4" podNamespace="kube-system" podName="storage-provisioner"
	I0612 15:03:43.678726   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494876    1517 topology_manager.go:215] "Topology Admit Handler" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4" podNamespace="default" podName="busybox-fc5497c4f-45qqd"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495127    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495306    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.499353    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.541672    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.557538    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-025000"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593012    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-cni-cfg\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593075    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-lib-modules\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593188    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-lib-modules\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593684    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d20f7489-1aa1-44b8-9221-4d1849884be4-tmp\") pod \"storage-provisioner\" (UID: \"d20f7489-1aa1-44b8-9221-4d1849884be4\") " pod="kube-system/storage-provisioner"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593711    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-xtables-lock\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593752    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-xtables-lock\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594460    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.094549489 +0000 UTC m=+6.763435539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.622682    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dcbc8e258f964f689941b6844769d9" path="/var/lib/kubelet/pods/04dcbc8e258f964f689941b6844769d9/volumes"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.623801    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610414aa8160848c0b6b79ea0a700b83" path="/var/lib/kubelet/pods/610414aa8160848c0b6b79ea0a700b83/volumes"
	I0612 15:03:43.678787   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.626972    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.679371   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627014    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.679371   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627132    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.127114564 +0000 UTC m=+6.796000614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.679371   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.673848    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-025000" podStartSLOduration=0.673800971 podStartE2EDuration="673.800971ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.632162175 +0000 UTC m=+6.301048225" watchObservedRunningTime="2024-06-12 22:02:31.673800971 +0000 UTC m=+6.342686921"
	I0612 15:03:43.679557   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.674234    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-025000" podStartSLOduration=0.674226172 podStartE2EDuration="674.226172ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.67337587 +0000 UTC m=+6.342261920" watchObservedRunningTime="2024-06-12 22:02:31.674226172 +0000 UTC m=+6.343112222"
	I0612 15:03:43.679592   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099190    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.679592   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099284    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.099266752 +0000 UTC m=+7.768152702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.679592   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199774    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.679592   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199808    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680251   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199864    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.199845384 +0000 UTC m=+7.868731334 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680379   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.394461    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.774495    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.791274    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106313    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106394    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.106375874 +0000 UTC m=+9.775261924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208318    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208375    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208431    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.208413609 +0000 UTC m=+9.877299559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.617822    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.618103    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.125562    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.126376    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.12633293 +0000 UTC m=+13.795218980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226548    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226607    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226693    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.226674161 +0000 UTC m=+13.895560111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.616712    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.680430   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617047    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.681015   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617270    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681078   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618147    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681078   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618607    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681231   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164650    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.681255   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164956    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.164935524 +0000 UTC m=+21.833821574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.265764    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266004    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.266062158 +0000 UTC m=+21.934948208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.616548    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.617577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:40 multinode-025000 kubelet[1517]: E0612 22:02:40.619032    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617010    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617816    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617105    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617755    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.617112    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.618034    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.621402    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234271    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.234402815 +0000 UTC m=+37.903288765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.681318   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335532    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681858   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335632    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681923   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335696    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.33568009 +0000 UTC m=+38.004566140 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.681923   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617048    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682076   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617530    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682120   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617040    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682186   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617673    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682218   13752 command_runner.go:130] > Jun 12 22:02:50 multinode-025000 kubelet[1517]: E0612 22:02:50.623368    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.682252   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.616848    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682350   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.617656    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682380   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617130    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682422   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617679    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682496   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617082    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682526   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617595    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682559   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.624795    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.617430    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.618180    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.616577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.617339    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:00 multinode-025000 kubelet[1517]: E0612 22:03:00.626741    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617176    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617573    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236005    1517 scope.go:117] "RemoveContainer" containerID="61910369e0d4ba1a5246a686e904c168fc7467d239e475004146ddf2835e8e78"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236962    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.239739    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d20f7489-1aa1-44b8-9221-4d1849884be4)\"" pod="kube-system/storage-provisioner" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4"
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284341    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.284401461 +0000 UTC m=+69.953287411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385432    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385531    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.385594617 +0000 UTC m=+70.054480667 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:43.682622   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.616668    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.683200   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.617100    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617214    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617674    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.628542    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.616455    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.617581    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617093    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617405    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 kubelet[1517]: I0612 22:03:13.617647    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.637114    1517 scope.go:117] "RemoveContainer" containerID="0749f44d03561395230c8a60a41853a49502741bf3bcd45acc924d346061f5b0"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: E0612 22:03:25.663119    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:43.683240   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.699754    1517 scope.go:117] "RemoveContainer" containerID="2455f315465b9508a3fe1025d7150342eedb3cb09eb5f8fd9b2cbbffe1306db0"
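	(Editor's annotation, not part of the captured log.) The repeating kubelet errors above show the volume manager's exponential backoff: durationBeforeRetry doubles from 2s to 4s, 8s, 16s and finally 32s while the "default"/"kube-root-ca.crt" and "kube-system"/"coredns" objects are not yet registered after the restart, and pod sync keeps being skipped with "cni config uninitialized" until the network plugin (kindnet in this cluster) writes its CNI config. A minimal way to check for that config on the node is a sketch like the following, assuming the standard CNI config directory /etc/cni/net.d:

	    minikube -p multinode-025000 ssh -- sudo ls /etc/cni/net.d

	Once a conflist appears there, the NetworkReady=false condition clears and the pending volume mounts retry successfully, which matches the node reporting Ready at 22:03:11 in the describe output below.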
	I0612 15:03:43.723363   13752 logs.go:123] Gathering logs for describe nodes ...
	I0612 15:03:43.723363   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 15:03:43.918572   13752 command_runner.go:130] > Name:               multinode-025000
	I0612 15:03:43.918624   13752 command_runner.go:130] > Roles:              control-plane
	I0612 15:03:43.918660   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:43.918660   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:43.918660   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:43.918660   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000
	I0612 15:03:43.918698   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:43.918698   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:43.918734   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:43.918734   13752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0612 15:03:43.918762   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:43.918762   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:43.918762   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:39:28 +0000
	I0612 15:03:43.918762   13752 command_runner.go:130] > Taints:             <none>
	I0612 15:03:43.918762   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:43.918762   13752 command_runner.go:130] > Lease:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000
	I0612 15:03:43.918762   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:43.918762   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 22:03:42 +0000
	I0612 15:03:43.918762   13752 command_runner.go:130] > Conditions:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0612 15:03:43.918762   13752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0612 15:03:43.918762   13752 command_runner.go:130] >   MemoryPressure   False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0612 15:03:43.918762   13752 command_runner.go:130] >   DiskPressure     False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0612 15:03:43.918762   13752 command_runner.go:130] >   PIDPressure      False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Ready            True    Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 22:03:11 +0000   KubeletReady                 kubelet is posting ready status
	I0612 15:03:43.918762   13752 command_runner.go:130] > Addresses:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   InternalIP:  172.23.200.184
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Hostname:    multinode-025000
	I0612 15:03:43.918762   13752 command_runner.go:130] > Capacity:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.918762   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.918762   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.918762   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.918762   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.918762   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.918762   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.918762   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.918762   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.918762   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.918762   13752 command_runner.go:130] > System Info:
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Machine ID:                 e65e28dfa5bf4f27a0123e4ae1007793
	I0612 15:03:43.918762   13752 command_runner.go:130] >   System UUID:                3e5a42d3-ea80-0c4d-ad18-4b76e4f3e22f
	I0612 15:03:43.918762   13752 command_runner.go:130] >   Boot ID:                    0efecf43-b070-4a8f-b542-4d1fd07306ad
	I0612 15:03:43.919355   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:43.919355   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:43.919355   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:43.919355   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:43.919355   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:43.919403   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:43.919403   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:43.919403   13752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0612 15:03:43.919403   13752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0612 15:03:43.919485   13752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0612 15:03:43.919485   13752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:43.919521   13752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0612 15:03:43.919552   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-45qqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:43.919552   13752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-vgcxw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0612 15:03:43.919587   13752 command_runner.go:130] >   kube-system                 etcd-multinode-025000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0612 15:03:43.919587   13752 command_runner.go:130] >   kube-system                 kindnet-bqlg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0612 15:03:43.919639   13752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-025000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0612 15:03:43.919673   13752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-025000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:43.919742   13752 command_runner.go:130] >   kube-system                 kube-proxy-47lr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:43.919742   13752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-025000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:43.919777   13752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:43.919777   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:43.919777   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:43.919854   13752 command_runner.go:130] >   Resource           Requests     Limits
	I0612 15:03:43.919873   13752 command_runner.go:130] >   --------           --------     ------
	I0612 15:03:43.919873   13752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0612 15:03:43.919899   13752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0612 15:03:43.919899   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0612 15:03:43.919899   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0612 15:03:43.919929   13752 command_runner.go:130] > Events:
	I0612 15:03:43.919929   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:43.919967   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:43.919967   13752 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0612 15:03:43.919967   13752 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0612 15:03:43.919967   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:43.920025   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.920025   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:43.920025   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.920086   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.920086   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:43.920129   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:43.920129   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.920129   13752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0612 15:03:43.920189   13752 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:43.920189   13752 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-025000 status is now: NodeReady
	I0612 15:03:43.920189   13752 command_runner.go:130] >   Normal  Starting                 78s                kubelet          Starting kubelet.
	I0612 15:03:43.920236   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:43.920271   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.920306   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Normal  RegisteredNode           59s                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:43.920337   13752 command_runner.go:130] > Name:               multinode-025000-m02
	I0612 15:03:43.920337   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:43.920337   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m02
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_42_39_0700
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:43.920337   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:43.920337   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:42:39 +0000
	I0612 15:03:43.920337   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:43.920337   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:43.920337   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:43.920337   13752 command_runner.go:130] > Lease:
	I0612 15:03:43.920337   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m02
	I0612 15:03:43.920337   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:43.920337   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:20 +0000
	I0612 15:03:43.920337   13752 command_runner.go:130] > Conditions:
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:43.920337   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:43.920337   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.920337   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.920337   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.920337   13752 command_runner.go:130] > Addresses:
	I0612 15:03:43.920337   13752 command_runner.go:130] >   InternalIP:  172.23.196.105
	I0612 15:03:43.920337   13752 command_runner.go:130] >   Hostname:    multinode-025000-m02
	I0612 15:03:43.920337   13752 command_runner.go:130] > Capacity:
	I0612 15:03:43.920337   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.920866   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.920866   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.920866   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.920866   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.920915   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:43.920915   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.920952   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.920952   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.920952   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.920952   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.920952   13752 command_runner.go:130] > System Info:
	I0612 15:03:43.920952   13752 command_runner.go:130] >   Machine ID:                 c11d7ff5518449f8bc8169a1fd7b0c4b
	I0612 15:03:43.920952   13752 command_runner.go:130] >   System UUID:                3b021c48-8479-f34c-83c2-77b944a77c5e
	I0612 15:03:43.920952   13752 command_runner.go:130] >   Boot ID:                    67e77c09-c6b2-4c01-b167-2481dd4a7a96
	I0612 15:03:43.920952   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:43.921034   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:43.921034   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:43.921034   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:43.921034   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:43.921071   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:43.921071   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:43.921071   13752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0612 15:03:43.921107   13752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0612 15:03:43.921107   13752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0612 15:03:43.921107   13752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:43.921167   13752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0612 15:03:43.921167   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-9bsls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:43.921167   13752 command_runner.go:130] >   kube-system                 kindnet-v4cqk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0612 15:03:43.921233   13752 command_runner.go:130] >   kube-system                 kube-proxy-tdcdp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0612 15:03:43.921233   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:43.921233   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:43.921276   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:43.921276   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:43.921276   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:43.921325   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:43.921325   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:43.921325   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:43.921367   13752 command_runner.go:130] > Events:
	I0612 15:03:43.921367   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:43.921367   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:43.921408   13752 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0612 15:03:43.921408   13752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-025000-m02 status is now: NodeReady
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  RegisteredNode           59s                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Normal  NodeNotReady             19s                node-controller  Node multinode-025000-m02 status is now: NodeNotReady
	I0612 15:03:43.921448   13752 command_runner.go:130] > Name:               multinode-025000-m03
	I0612 15:03:43.921448   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:43.921448   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m03
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_57_59_0700
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:43.921448   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:43.921448   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:57:58 +0000
	I0612 15:03:43.921448   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:43.921448   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:43.921448   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:43.921448   13752 command_runner.go:130] > Lease:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m03
	I0612 15:03:43.921448   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:43.921448   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:00 +0000
	I0612 15:03:43.921448   13752 command_runner.go:130] > Conditions:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:43.921448   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:43.921448   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.921448   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.921448   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:43.921448   13752 command_runner.go:130] > Addresses:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   InternalIP:  172.23.206.72
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Hostname:    multinode-025000-m03
	I0612 15:03:43.921448   13752 command_runner.go:130] > Capacity:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.921448   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.921448   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.921448   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.921448   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.921448   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:43.921448   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:43.921448   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:43.921448   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:43.921448   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:43.921448   13752 command_runner.go:130] > System Info:
	I0612 15:03:43.921448   13752 command_runner.go:130] >   Machine ID:                 b62d5e6740dc42d880d6595ac7dd57ae
	I0612 15:03:43.922029   13752 command_runner.go:130] >   System UUID:                31a13a9b-b7c6-6643-8352-fb322079216a
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Boot ID:                    a21b9eff-2471-4589-9e35-5845aae64358
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:43.922029   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:43.922029   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:43.922139   13752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0612 15:03:43.922139   13752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0612 15:03:43.922181   13752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0612 15:03:43.922181   13752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:43.922207   13752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0612 15:03:43.922223   13752 command_runner.go:130] >   kube-system                 kindnet-8252q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0612 15:03:43.922223   13752 command_runner.go:130] >   kube-system                 kube-proxy-7jwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0612 15:03:43.922223   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:43.922223   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:43.922223   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:43.922292   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:43.922292   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:43.922292   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:43.922292   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:43.922292   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:43.922292   13752 command_runner.go:130] > Events:
	I0612 15:03:43.922371   13752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0612 15:03:43.922371   13752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0612 15:03:43.922371   13752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0612 15:03:43.922371   13752 command_runner.go:130] >   Normal  Starting                 5m42s                  kube-proxy       
	I0612 15:03:43.922371   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:43.922453   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:43.922548   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:43.922548   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:43.922548   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:43.922625   13752 command_runner.go:130] >   Normal  RegisteredNode           5m44s                  node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:43.922661   13752 command_runner.go:130] >   Normal  NodeReady                5m37s                  kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:43.922661   13752 command_runner.go:130] >   Normal  NodeNotReady             3m58s                  node-controller  Node multinode-025000-m03 status is now: NodeNotReady
	I0612 15:03:43.922661   13752 command_runner.go:130] >   Normal  RegisteredNode           59s                    node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:43.932229   13752 logs.go:123] Gathering logs for kube-scheduler [755750ecd1e3] ...
	I0612 15:03:43.932229   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 755750ecd1e3"
	I0612 15:03:43.942673   13752 command_runner.go:130] ! I0612 22:02:28.771072       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:43.942673   13752 command_runner.go:130] ! W0612 22:02:31.003959       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:43.957875   13752 command_runner.go:130] ! W0612 22:02:31.004072       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:43.957875   13752 command_runner.go:130] ! W0612 22:02:31.004087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:43.957875   13752 command_runner.go:130] ! W0612 22:02:31.004098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.034273       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.034440       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.039288       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.039331       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.039699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.040018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:43.957875   13752 command_runner.go:130] ! I0612 22:02:31.139849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
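The scheduler warnings above are the usual startup race: kube-scheduler cannot yet read configmap/extension-apiserver-authentication, and the log message itself names the workaround. A sketch of that command with the log's own placeholder names left in (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are templates to fill in, not values from this run):

	# Grant read access to the extension-apiserver-authentication configmap,
	# per the template printed by requestheader_controller.go
	kubectl create rolebinding ROLEBINDING_NAME \
	  -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA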
	I0612 15:03:43.959985   13752 logs.go:123] Gathering logs for kube-proxy [227a905829b0] ...
	I0612 15:03:43.959985   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227a905829b0"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.538961       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.585761       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.200.184"]
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.754056       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.754118       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.754141       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:43.983086   13752 command_runner.go:130] ! I0612 22:02:33.765449       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:43.986095   13752 command_runner.go:130] ! I0612 22:02:33.766192       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:43.986095   13752 command_runner.go:130] ! I0612 22:02:33.766246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:43.986132   13752 command_runner.go:130] ! I0612 22:02:33.769980       1 config.go:192] "Starting service config controller"
	I0612 15:03:43.986132   13752 command_runner.go:130] ! I0612 22:02:33.770461       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:43.986174   13752 command_runner.go:130] ! I0612 22:02:33.770493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:43.986174   13752 command_runner.go:130] ! I0612 22:02:33.770630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:43.986174   13752 command_runner.go:130] ! I0612 22:02:33.773852       1 config.go:319] "Starting node config controller"
	I0612 15:03:43.986238   13752 command_runner.go:130] ! I0612 22:02:33.773944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:43.986238   13752 command_runner.go:130] ! I0612 22:02:33.870743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:43.986265   13752 command_runner.go:130] ! I0612 22:02:33.870698       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:43.986265   13752 command_runner.go:130] ! I0612 22:02:33.882534       1 shared_informer.go:320] Caches are synced for node config
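kube-proxy came up in single-stack IPv4 iptables mode and set route_localnet=1 so NodePorts also answer on localhost; the proxier.go line names the two knobs for changing that. A hedged sketch of both flags (the CIDR is illustrative, chosen to match this run's 172.23.0.0/16 node addressing, not taken from the cluster's config):

	# Either disable localhost NodePorts outright ...
	kube-proxy --iptables-localhost-nodeports=false
	# ... or restrict which addresses serve NodePort traffic, filtering loopback
	kube-proxy --nodeport-addresses=172.23.0.0/16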
	I0612 15:03:43.988383   13752 logs.go:123] Gathering logs for kindnet [4d60d82f6bc5] ...
	I0612 15:03:43.988383   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d60d82f6bc5"
	I0612 15:03:44.012769   13752 command_runner.go:130] ! I0612 21:48:53.982546       1 main.go:227] handling current node
	I0612 15:03:44.012769   13752 command_runner.go:130] ! I0612 21:48:53.982561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.016637   13752 command_runner.go:130] ! I0612 21:48:53.982568       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.016674   13752 command_runner.go:130] ! I0612 21:48:53.982982       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.016722   13752 command_runner.go:130] ! I0612 21:48:53.983049       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.016722   13752 command_runner.go:130] ! I0612 21:49:03.989649       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.016722   13752 command_runner.go:130] ! I0612 21:49:03.989791       1 main.go:227] handling current node
	I0612 15:03:44.016767   13752 command_runner.go:130] ! I0612 21:49:03.989809       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:03.989817       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:03.990195       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:03.990415       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:14.000384       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:14.000493       1 main.go:227] handling current node
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:14.000507       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.016973   13752 command_runner.go:130] ! I0612 21:49:14.000513       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017720   13752 command_runner.go:130] ! I0612 21:49:14.000627       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017720   13752 command_runner.go:130] ! I0612 21:49:14.000640       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017782   13752 command_runner.go:130] ! I0612 21:49:24.006829       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017782   13752 command_runner.go:130] ! I0612 21:49:24.006871       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:24.006883       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:24.006889       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:24.007645       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:24.007745       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.016679       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.016806       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.016838       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.016845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.017149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:34.017279       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.025835       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.025933       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.025947       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.025955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.026381       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:44.026533       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033148       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033257       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033273       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033281       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033402       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:49:54.033435       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:50:04.046279       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:50:04.046719       1 main.go:227] handling current node
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:50:04.046832       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.017813   13752 command_runner.go:130] ! I0612 21:50:04.047109       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018359   13752 command_runner.go:130] ! I0612 21:50:04.047537       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018359   13752 command_runner.go:130] ! I0612 21:50:04.047572       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018359   13752 command_runner.go:130] ! I0612 21:50:14.064171       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018473   13752 command_runner.go:130] ! I0612 21:50:14.064216       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:14.064230       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:14.064236       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:14.064574       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:14.064665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.071894       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.071935       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.071949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.071955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.072148       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:24.072184       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086428       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086522       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086536       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086543       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086690       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:34.086707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.093862       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.093905       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.093919       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.093925       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.094840       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:44.094916       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.102869       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103074       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103091       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103100       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103237       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:50:54.103276       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110391       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110501       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110517       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110556       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110721       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:04.110794       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121126       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121263       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121280       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121288       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121430       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:14.121462       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.131659       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.131690       1 main.go:227] handling current node
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.131702       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.131708       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.018511   13752 command_runner.go:130] ! I0612 21:51:24.132287       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019115   13752 command_runner.go:130] ! I0612 21:51:24.132319       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019115   13752 command_runner.go:130] ! I0612 21:51:34.139419       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019115   13752 command_runner.go:130] ! I0612 21:51:34.139546       1 main.go:227] handling current node
	I0612 15:03:44.019165   13752 command_runner.go:130] ! I0612 21:51:34.139561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019165   13752 command_runner.go:130] ! I0612 21:51:34.139570       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019165   13752 command_runner.go:130] ! I0612 21:51:34.140149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019165   13752 command_runner.go:130] ! I0612 21:51:34.140253       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019217   13752 command_runner.go:130] ! I0612 21:51:44.152295       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019217   13752 command_runner.go:130] ! I0612 21:51:44.152430       1 main.go:227] handling current node
	I0612 15:03:44.019217   13752 command_runner.go:130] ! I0612 21:51:44.152464       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019257   13752 command_runner.go:130] ! I0612 21:51:44.152471       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019257   13752 command_runner.go:130] ! I0612 21:51:44.153262       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019257   13752 command_runner.go:130] ! I0612 21:51:44.153471       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019257   13752 command_runner.go:130] ! I0612 21:51:54.160684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019307   13752 command_runner.go:130] ! I0612 21:51:54.160938       1 main.go:227] handling current node
	I0612 15:03:44.019307   13752 command_runner.go:130] ! I0612 21:51:54.160953       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019347   13752 command_runner.go:130] ! I0612 21:51:54.160960       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019347   13752 command_runner.go:130] ! I0612 21:51:54.161457       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019388   13752 command_runner.go:130] ! I0612 21:51:54.161482       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019388   13752 command_runner.go:130] ! I0612 21:52:04.170421       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.170526       1 main.go:227] handling current node
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.170541       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.170548       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.171076       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:04.171113       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019428   13752 command_runner.go:130] ! I0612 21:52:14.180403       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019486   13752 command_runner.go:130] ! I0612 21:52:14.180490       1 main.go:227] handling current node
	I0612 15:03:44.019486   13752 command_runner.go:130] ! I0612 21:52:14.180508       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019486   13752 command_runner.go:130] ! I0612 21:52:14.180516       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019486   13752 command_runner.go:130] ! I0612 21:52:14.180994       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:14.181032       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:24.195314       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:24.195545       1 main.go:227] handling current node
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:24.195735       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019535   13752 command_runner.go:130] ! I0612 21:52:24.195807       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019666   13752 command_runner.go:130] ! I0612 21:52:24.196026       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019666   13752 command_runner.go:130] ! I0612 21:52:24.196064       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019735   13752 command_runner.go:130] ! I0612 21:52:34.202013       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.019735   13752 command_runner.go:130] ! I0612 21:52:34.202806       1 main.go:227] handling current node
	I0612 15:03:44.019807   13752 command_runner.go:130] ! I0612 21:52:34.202932       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.019807   13752 command_runner.go:130] ! I0612 21:52:34.203029       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.019867   13752 command_runner.go:130] ! I0612 21:52:34.203265       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.019942   13752 command_runner.go:130] ! I0612 21:52:34.203299       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.019942   13752 command_runner.go:130] ! I0612 21:52:44.209271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020009   13752 command_runner.go:130] ! I0612 21:52:44.209440       1 main.go:227] handling current node
	I0612 15:03:44.020009   13752 command_runner.go:130] ! I0612 21:52:44.209476       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:44.209546       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:44.209839       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:44.210283       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:54.223351       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020071   13752 command_runner.go:130] ! I0612 21:52:54.223443       1 main.go:227] handling current node
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:52:54.223459       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:52:54.223466       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:52:54.223810       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:52:54.223840       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:53:04.236876       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020139   13752 command_runner.go:130] ! I0612 21:53:04.237155       1 main.go:227] handling current node
	I0612 15:03:44.020221   13752 command_runner.go:130] ! I0612 21:53:04.237949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020221   13752 command_runner.go:130] ! I0612 21:53:04.238341       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020221   13752 command_runner.go:130] ! I0612 21:53:04.238673       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020221   13752 command_runner.go:130] ! I0612 21:53:04.238707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020293   13752 command_runner.go:130] ! I0612 21:53:14.245069       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020293   13752 command_runner.go:130] ! I0612 21:53:14.245110       1 main.go:227] handling current node
	I0612 15:03:44.020293   13752 command_runner.go:130] ! I0612 21:53:14.245122       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020293   13752 command_runner.go:130] ! I0612 21:53:14.245131       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:14.245834       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:14.245932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:24.258923       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:24.258965       1 main.go:227] handling current node
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:24.258977       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020355   13752 command_runner.go:130] ! I0612 21:53:24.258983       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:24.259367       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:24.259399       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.265573       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.265738       1 main.go:227] handling current node
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.265787       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.265797       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020415   13752 command_runner.go:130] ! I0612 21:53:34.266180       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020486   13752 command_runner.go:130] ! I0612 21:53:34.266257       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020486   13752 command_runner.go:130] ! I0612 21:53:44.278968       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020486   13752 command_runner.go:130] ! I0612 21:53:44.279173       1 main.go:227] handling current node
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:44.279207       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:44.279294       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:44.279698       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:44.279829       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:54.290366       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020536   13752 command_runner.go:130] ! I0612 21:53:54.290472       1 main.go:227] handling current node
	I0612 15:03:44.020597   13752 command_runner.go:130] ! I0612 21:53:54.290487       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020597   13752 command_runner.go:130] ! I0612 21:53:54.290494       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020597   13752 command_runner.go:130] ! I0612 21:53:54.291158       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020597   13752 command_runner.go:130] ! I0612 21:53:54.291263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020660   13752 command_runner.go:130] ! I0612 21:54:04.308014       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020660   13752 command_runner.go:130] ! I0612 21:54:04.308117       1 main.go:227] handling current node
	I0612 15:03:44.020660   13752 command_runner.go:130] ! I0612 21:54:04.308133       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020660   13752 command_runner.go:130] ! I0612 21:54:04.308142       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:04.308605       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:04.308643       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:14.316271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:14.316380       1 main.go:227] handling current node
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:14.316396       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020736   13752 command_runner.go:130] ! I0612 21:54:14.316403       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:14.316942       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:14.316959       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:24.330853       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:24.331009       1 main.go:227] handling current node
	I0612 15:03:44.020791   13752 command_runner.go:130] ! I0612 21:54:24.331025       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:24.331033       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:24.331178       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:24.331213       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:34.340396       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:34.340543       1 main.go:227] handling current node
	I0612 15:03:44.020852   13752 command_runner.go:130] ! I0612 21:54:34.340558       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:34.340565       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:34.340924       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:34.341013       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:44.347468       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.020910   13752 command_runner.go:130] ! I0612 21:54:44.347599       1 main.go:227] handling current node
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:44.347614       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:44.347622       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:44.348279       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:44.348396       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.020971   13752 command_runner.go:130] ! I0612 21:54:54.364900       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365031       1 main.go:227] handling current node
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365046       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365054       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365542       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021034   13752 command_runner.go:130] ! I0612 21:54:54.365727       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381041       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381087       1 main.go:227] handling current node
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381103       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381110       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021096   13752 command_runner.go:130] ! I0612 21:55:04.381700       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:04.381853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:14.395619       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:14.395666       1 main.go:227] handling current node
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:14.395679       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021161   13752 command_runner.go:130] ! I0612 21:55:14.395686       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:14.396514       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:14.396536       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:24.411927       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:24.412012       1 main.go:227] handling current node
	I0612 15:03:44.021223   13752 command_runner.go:130] ! I0612 21:55:24.412028       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:24.412036       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:24.412568       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:24.412661       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:34.420011       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:34.420100       1 main.go:227] handling current node
	I0612 15:03:44.021296   13752 command_runner.go:130] ! I0612 21:55:34.420115       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021382   13752 command_runner.go:130] ! I0612 21:55:34.420122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021382   13752 command_runner.go:130] ! I0612 21:55:34.420481       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021382   13752 command_runner.go:130] ! I0612 21:55:34.420570       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021382   13752 command_runner.go:130] ! I0612 21:55:44.432502       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021441   13752 command_runner.go:130] ! I0612 21:55:44.432604       1 main.go:227] handling current node
	I0612 15:03:44.021441   13752 command_runner.go:130] ! I0612 21:55:44.432620       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021441   13752 command_runner.go:130] ! I0612 21:55:44.432632       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021441   13752 command_runner.go:130] ! I0612 21:55:44.432881       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:44.433061       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.446991       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.447440       1 main.go:227] handling current node
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.447622       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.447655       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021502   13752 command_runner.go:130] ! I0612 21:55:54.447830       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:55:54.447901       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:56:04.463393       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:56:04.463546       1 main.go:227] handling current node
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:56:04.463575       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021565   13752 command_runner.go:130] ! I0612 21:56:04.463596       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021628   13752 command_runner.go:130] ! I0612 21:56:04.463900       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021628   13752 command_runner.go:130] ! I0612 21:56:04.463932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021628   13752 command_runner.go:130] ! I0612 21:56:14.477690       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021628   13752 command_runner.go:130] ! I0612 21:56:14.477837       1 main.go:227] handling current node
	I0612 15:03:44.021670   13752 command_runner.go:130] ! I0612 21:56:14.477852       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021670   13752 command_runner.go:130] ! I0612 21:56:14.477860       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021670   13752 command_runner.go:130] ! I0612 21:56:14.478029       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:14.478096       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.485525       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.485620       1 main.go:227] handling current node
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.485655       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.485663       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021732   13752 command_runner.go:130] ! I0612 21:56:24.486202       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:24.486237       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:34.502904       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:34.502951       1 main.go:227] handling current node
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:34.502964       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021837   13752 command_runner.go:130] ! I0612 21:56:34.502970       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:34.503088       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:34.503684       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:44.512292       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:44.512356       1 main.go:227] handling current node
	I0612 15:03:44.021899   13752 command_runner.go:130] ! I0612 21:56:44.512368       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:44.512374       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:44.512909       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:44.513033       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:54.520903       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:54.521017       1 main.go:227] handling current node
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:54.521034       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.021964   13752 command_runner.go:130] ! I0612 21:56:54.521041       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022037   13752 command_runner.go:130] ! I0612 21:56:54.521441       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022037   13752 command_runner.go:130] ! I0612 21:56:54.521665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022037   13752 command_runner.go:130] ! I0612 21:57:04.535531       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022037   13752 command_runner.go:130] ! I0612 21:57:04.535625       1 main.go:227] handling current node
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:04.535665       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:04.535672       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:04.536272       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:04.536355       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:14.559304       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:14.559354       1 main.go:227] handling current node
	I0612 15:03:44.022094   13752 command_runner.go:130] ! I0612 21:57:14.559375       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022181   13752 command_runner.go:130] ! I0612 21:57:14.559382       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022181   13752 command_runner.go:130] ! I0612 21:57:14.559735       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022181   13752 command_runner.go:130] ! I0612 21:57:14.560332       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568057       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568103       1 main.go:227] handling current node
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568116       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.568938       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022262   13752 command_runner.go:130] ! I0612 21:57:24.569042       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022331   13752 command_runner.go:130] ! I0612 21:57:34.584121       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022331   13752 command_runner.go:130] ! I0612 21:57:34.584277       1 main.go:227] handling current node
	I0612 15:03:44.022384   13752 command_runner.go:130] ! I0612 21:57:34.584502       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022384   13752 command_runner.go:130] ! I0612 21:57:34.584607       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022462   13752 command_runner.go:130] ! I0612 21:57:34.584995       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:34.585095       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:44.600201       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:44.600339       1 main.go:227] handling current node
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:44.600353       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022562   13752 command_runner.go:130] ! I0612 21:57:44.600361       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022619   13752 command_runner.go:130] ! I0612 21:57:44.600842       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:44.600859       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:54.615436       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:54.615497       1 main.go:227] handling current node
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:54.615511       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022680   13752 command_runner.go:130] ! I0612 21:57:54.615536       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022734   13752 command_runner.go:130] ! I0612 21:58:04.629487       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022734   13752 command_runner.go:130] ! I0612 21:58:04.629657       1 main.go:227] handling current node
	I0612 15:03:44.022734   13752 command_runner.go:130] ! I0612 21:58:04.629797       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022734   13752 command_runner.go:130] ! I0612 21:58:04.629891       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:04.630131       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:04.631059       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:04.631221       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:14.647500       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022814   13752 command_runner.go:130] ! I0612 21:58:14.647527       1 main.go:227] handling current node
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:14.647539       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:14.647544       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:14.647661       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:14.647672       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:24.655905       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022892   13752 command_runner.go:130] ! I0612 21:58:24.656017       1 main.go:227] handling current node
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:24.656064       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:24.656140       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:24.656636       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:24.656721       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:34.670254       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.022995   13752 command_runner.go:130] ! I0612 21:58:34.670590       1 main.go:227] handling current node
	I0612 15:03:44.023071   13752 command_runner.go:130] ! I0612 21:58:34.670966       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023071   13752 command_runner.go:130] ! I0612 21:58:34.671845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023071   13752 command_runner.go:130] ! I0612 21:58:34.672269       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023071   13752 command_runner.go:130] ! I0612 21:58:34.672369       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.682684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.682854       1 main.go:227] handling current node
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.682877       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.682887       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023127   13752 command_runner.go:130] ! I0612 21:58:44.683737       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023223   13752 command_runner.go:130] ! I0612 21:58:44.683808       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023223   13752 command_runner.go:130] ! I0612 21:58:54.691077       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023287   13752 command_runner.go:130] ! I0612 21:58:54.691167       1 main.go:227] handling current node
	I0612 15:03:44.023287   13752 command_runner.go:130] ! I0612 21:58:54.691199       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023287   13752 command_runner.go:130] ! I0612 21:58:54.691207       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023393   13752 command_runner.go:130] ! I0612 21:58:54.691344       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023393   13752 command_runner.go:130] ! I0612 21:58:54.691357       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.700863       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701017       1 main.go:227] handling current node
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701032       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701040       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701620       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:04.701736       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:14.717668       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023515   13752 command_runner.go:130] ! I0612 21:59:14.717949       1 main.go:227] handling current node
	I0612 15:03:44.023598   13752 command_runner.go:130] ! I0612 21:59:14.717991       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023647   13752 command_runner.go:130] ! I0612 21:59:14.718050       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023751   13752 command_runner.go:130] ! I0612 21:59:14.718200       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023751   13752 command_runner.go:130] ! I0612 21:59:14.718263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724311       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724441       1 main.go:227] handling current node
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724456       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724464       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724785       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:24.724853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023799   13752 command_runner.go:130] ! I0612 21:59:34.737266       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023888   13752 command_runner.go:130] ! I0612 21:59:34.737410       1 main.go:227] handling current node
	I0612 15:03:44.023888   13752 command_runner.go:130] ! I0612 21:59:34.737425       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023888   13752 command_runner.go:130] ! I0612 21:59:34.737432       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.023888   13752 command_runner.go:130] ! I0612 21:59:34.738157       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:34.738269       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:44.746123       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:44.746292       1 main.go:227] handling current node
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:44.746313       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.023987   13752 command_runner.go:130] ! I0612 21:59:44.746332       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:44.746856       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:44.746925       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.752611       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.752658       1 main.go:227] handling current node
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.752671       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.752678       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.753183       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:44.024070   13752 command_runner.go:130] ! I0612 21:59:54.753277       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
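	Note: the kindnet entries above show node multinode-025000-m03 re-registering with a new IP (172.23.206.72 rather than 172.23.206.201) and a new pod CIDR (10.244.3.0/24 rather than 10.244.2.0/24), after which kindnet programs a matching route (routes.go:62 at 21:58:04). If stale routing were suspected, one way to check the table from this report's host would be the following sketch (hypothetical invocation, reusing the profile name from these logs; each peer node should have exactly one 10.244.x.0/24 route via its current IP):
	
		out/minikube-windows-amd64.exe -p multinode-025000 ssh "ip route show | grep 10.244"
	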
	I0612 15:03:44.040336   13752 logs.go:123] Gathering logs for Docker ...
	I0612 15:03:44.040336   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0612 15:03:44.077287   13752 command_runner.go:130] > Jun 12 22:00:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:44.077384   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:44.077527   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:44.077527   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077527   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:44.077527   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077588   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:44.077588   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:44.077641   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:44.077641   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:44.077641   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:44.077641   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:44.077788   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.077916   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
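	Note: the journal block above records cri-docker.service starting before dockerd was up: cri-dockerd exits fatally three times because /var/run/docker.sock is not yet accepting connections, and after the third failure systemd's start rate limiting triggers ("Start request repeated too quickly"). The service recovers on its own once the Docker engine is running (see the 22:02:20 entries below). If it had stayed down, a minimal manual-recovery sketch, assuming stock systemd units inside the guest, would be:
	
		sudo systemctl status docker cri-docker
		sudo systemctl reset-failed cri-docker
		sudo systemctl start cri-docker
	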
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.903212301Z" level=info msg="Starting up"
	I0612 15:03:44.078051   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.904075211Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:44.078151   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.905013523Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=653
	I0612 15:03:44.078151   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.936715611Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960715605Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960765806Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960836707Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961045509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078185   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961654317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961681417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961916220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962126123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962152723Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:44.078295   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962167223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078436   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962695730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078436   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.963400938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078436   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966083771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.078436   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966199872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.078577   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.078577   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966461076Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:44.078577   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967039883Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:44.078577   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967257385Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967282486Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974400773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974631276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974732277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974755077Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:44.078708   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974771478Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:44.078829   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974844078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:44.078829   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975137982Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:44.078829   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975475986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:44.078829   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975634588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:44.078917   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975657088Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:44.078917   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975672789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.078917   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975691989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.078986   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975721989Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.078986   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975744389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.078986   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975762790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975776490Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975789190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975800790Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975819990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079074   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975835091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079163   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975847091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079163   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975859491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079163   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975870791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079247   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975883291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079247   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975894491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079247   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975906891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079247   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975920192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079334   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975935492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079334   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975947192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079334   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975958792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079433   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975971092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079433   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975989492Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:44.079433   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976009893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976030193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976044093Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976167595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976210595Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976227295Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976239996Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976250696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976263096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976273096Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976489199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976766002Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976819403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976839003Z" level=info msg="containerd successfully booted in 0.042772s"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:51 multinode-025000 dockerd[647]: time="2024-06-12T22:01:51.958896661Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.175284022Z" level=info msg="Loading containers: start."
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.600253538Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.679773678Z" level=info msg="Loading containers: done."
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.711890198Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.712661408Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774658419Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774960723Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 systemd[1]: Started Docker Application Container Engine.
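	Note: dockerd boots cleanly at 22:01:52 and is then stopped again at 22:02:17; minikube typically restarts the engine during provisioning (for example after rewriting the daemon configuration), so a stop/start pair at this point is not by itself a failure. To isolate just this window when reading the guest journal, journalctl's time filters can be used, e.g.:
	
		sudo journalctl -u docker -u cri-docker --since "2024-06-12 22:01:50" --until "2024-06-12 22:02:30"
	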
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.292813222Z" level=info msg="Processing signal 'terminated'"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 systemd[1]: Stopping Docker Application Container Engine...
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.294859626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295213927Z" level=info msg="Daemon shutdown complete"
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295258527Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295281927Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0612 15:03:44.079491   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: docker.service: Deactivated successfully.
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Stopped Docker Application Container Engine.
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.376333019Z" level=info msg="Starting up"
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.377520222Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:44.080072   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.378639425Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0612 15:03:44.080170   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.412854304Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:44.080170   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437361860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:44.080217   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437471260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:44.080266   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437558660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:44.080266   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437600861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080266   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437638361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.080362   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437674061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080400   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437957561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.080447   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438006462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080463   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438028962Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:44.080463   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438041362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080532   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438072362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080532   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438209862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080532   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441166869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.080619   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441307169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:44.080619   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441467569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:44.080619   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441599370Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:44.080703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441629870Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:44.080703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441648170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:44.080703   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441660470Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:44.080785   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442075271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:44.080785   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442166571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:44.080785   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442187871Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:44.080785   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442201971Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:44.080870   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442217371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:44.080870   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442266071Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:44.080870   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442474372Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:44.080870   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442551072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:44.080953   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442567272Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:44.080953   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442579372Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:44.080953   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442592672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442605072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442627672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442645772Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442660172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442671872Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442683572Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442694372Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442714572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081037   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442727972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081248   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442739972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081248   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442754772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081248   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442766572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081248   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442778073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081335   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442788873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081335   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442800473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081335   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442812673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081442   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442826373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081442   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442837973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081442   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442849073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081442   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442860373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081522   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442875173Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:44.081522   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442974073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081522   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442994973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081608   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443006773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:44.081608   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443066573Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:44.081608   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443088973Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:44.081608   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443100473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:44.081689   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443113173Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:44.081689   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443144073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:44.081762   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443156573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:44.081762   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443166273Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443418874Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443494174Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443534574Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443571274Z" level=info msg="containerd successfully booted in 0.033238s"
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.419757425Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:44.081851   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.449018892Z" level=info msg="Loading containers: start."
	I0612 15:03:44.081945   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.739331061Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:44.081999   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.815989438Z" level=info msg="Loading containers: done."
	I0612 15:03:44.081999   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842536299Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:44.082057   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842674899Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885012997Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885608398Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:44.082086   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:44.082166   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:44.082166   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:44.082166   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0612 15:03:44.082166   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Loaded network plugin cni"
	I0612 15:03:44.082247   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0612 15:03:44.082247   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0612 15:03:44.082247   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0612 15:03:44.082247   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0612 15:03:44.082329   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start cri-dockerd grpc backend"
	I0612 15:03:44.082329   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0612 15:03:44.082329   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-vgcxw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d\""
	I0612 15:03:44.082416   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-45qqd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27\""
	I0612 15:03:44.082416   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449365529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.082416   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449468129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.082507   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449499429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082507   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449616229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082507   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464315863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.082588   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464397563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.082588   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464444563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082588   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464765264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082676   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.578440826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.082676   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.581064832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.082676   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582145135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082758   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582532135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082758   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617373216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.082758   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617486816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.082838   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617504016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082838   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617593816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.082838   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da184577f0371664d0a472b38bbfcfd866178308bf69eaabdaefb47d30a7057a/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.082919   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a228f6c30fdf44f53a40ac14a2a8b995155f743739957ac413c700924fc873ed/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.082919   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20cbfb3fb853177b89366d165b6a1f67628b2c429266b77034ee6d1ca68b7bac/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.082919   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.082998   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094370315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083048   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094456516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094499716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094865116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.162934973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163009674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163029074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163177074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.167659984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170028290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170289390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.171053192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233482736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233861237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234167138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234578639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.197280978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198158780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198341381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213822116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213977717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214060117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083077   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214298317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234135963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234182263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234192563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234264863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564394224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083656   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564548725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083850   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564602325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.565056126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630517377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630663477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630850678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.635052387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.972834166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.973545267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974028469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974235669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1044]: time="2024-06-12T22:03:03.121297409Z" level=info msg="ignoring event" container=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.122616734Z" level=info msg="shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123474651Z" level=warning msg="cleaning up after shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123682355Z" level=info msg="cleaning up dead shim" namespace=moby
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819634342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819751243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819788644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.820654753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004015440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004176540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004193540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.005298945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006561551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006633551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.083890   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006681251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006796752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/986567ef57643aec05ae5353795c364b380cb0f13c2ba98b1c4e04897e7b2e46/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2434f89aefe0079002e81e136580c67ef1dca28bfa3b4c1e950241aea9663d4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542434894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542705495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.084479   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542742195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.543238997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606926167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606994167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607017268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.084686   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607410069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:44.106614   13752 logs.go:123] Gathering logs for dmesg ...
	I0612 15:03:44.106614   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 15:03:44.135691   13752 command_runner.go:130] > [Jun12 22:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.131000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.025099] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.064850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.023448] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0612 15:03:44.135691   13752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +5.508165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +1.342262] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +1.269809] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +7.259362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0612 15:03:44.135691   13752 command_runner.go:130] > [Jun12 22:01] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.155290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [Jun12 22:02] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.095843] kauditd_printk_skb: 73 callbacks suppressed
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.507476] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.171390] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.210222] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +2.904531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.189304] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.162041] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.261611] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.815328] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +0.096217] kauditd_printk_skb: 205 callbacks suppressed
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +3.646175] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +1.441935] kauditd_printk_skb: 54 callbacks suppressed
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +5.624550] kauditd_printk_skb: 20 callbacks suppressed
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +3.644538] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	I0612 15:03:44.135691   13752 command_runner.go:130] > [  +8.250122] kauditd_printk_skb: 70 callbacks suppressed
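	Each "Gathering logs for ..." step above follows the same shape: logs.go:123 names a source, ssh_runner.go:195 runs a single shell command on the node, and command_runner.go:130 echoes every line the command prints back into minikube's own log. Below is a minimal sketch of that run-and-echo loop, using a plain os/exec call in place of minikube's SSH runner; the gather helper and its output prefix are assumptions for illustration, not minikube's actual API.

	// Sketch only: os/exec stands in for minikube's SSH runner.
	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
	)

	// gather runs cmdline through bash and re-logs each output line,
	// mirroring the "command_runner.go:130] >" lines captured above.
	func gather(name, cmdline string) error {
		fmt.Printf("Gathering logs for %s ...\n", name)
		cmd := exec.Command("/bin/bash", "-c", cmdline)
		out, err := cmd.StdoutPipe()
		if err != nil {
			return err
		}
		if err := cmd.Start(); err != nil {
			return err
		}
		sc := bufio.NewScanner(out)
		for sc.Scan() {
			fmt.Printf("command_runner> %s\n", sc.Text())
		}
		if err := sc.Err(); err != nil {
			return err
		}
		return cmd.Wait()
	}

	func main() {
		// The same commands the harness issues in this section, run locally
		// for illustration (they only succeed on the node itself).
		_ = gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
		_ = gather("coredns [26e5daf354e3]", `docker logs --tail 400 26e5daf354e3`)
	}

	The two invocations mirror the exact commands recorded in this section; against the real multinode-025000 node they are issued over SSH rather than locally.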
	I0612 15:03:44.138662   13752 logs.go:123] Gathering logs for coredns [26e5daf354e3] ...
	I0612 15:03:44.138662   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e5daf354e3"
	I0612 15:03:44.163879   13752 command_runner.go:130] > .:53
	I0612 15:03:44.164737   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:44.164737   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:44.164737   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:44.164737   13752 command_runner.go:130] > [INFO] 127.0.0.1:54952 - 9035 "HINFO IN 225709527310201015.7757756956422223857. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039110892s
	I0612 15:03:44.165079   13752 logs.go:123] Gathering logs for kube-apiserver [bbe2d2e51b5f] ...
	I0612 15:03:44.165114   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe2d2e51b5f"
	I0612 15:03:44.187117   13752 command_runner.go:130] ! I0612 22:02:28.032945       1 options.go:221] external host was not specified, using 172.23.200.184
	I0612 15:03:44.193233   13752 command_runner.go:130] ! I0612 22:02:28.036290       1 server.go:148] Version: v1.30.1
	I0612 15:03:44.193233   13752 command_runner.go:130] ! I0612 22:02:28.036339       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:44.193311   13752 command_runner.go:130] ! I0612 22:02:28.916544       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 15:03:44.193311   13752 command_runner.go:130] ! I0612 22:02:28.917947       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:44.193311   13752 command_runner.go:130] ! I0612 22:02:28.921952       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 15:03:44.193395   13752 command_runner.go:130] ! I0612 22:02:28.922146       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 15:03:44.193395   13752 command_runner.go:130] ! I0612 22:02:28.922426       1 instance.go:299] Using reconciler: lease
	I0612 15:03:44.193395   13752 command_runner.go:130] ! I0612 22:02:29.570201       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0612 15:03:44.193478   13752 command_runner.go:130] ! W0612 22:02:29.570355       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.193478   13752 command_runner.go:130] ! I0612 22:02:29.801222       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0612 15:03:44.193576   13752 command_runner.go:130] ! I0612 22:02:29.801702       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0612 15:03:44.193576   13752 command_runner.go:130] ! I0612 22:02:30.046166       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0612 15:03:44.193576   13752 command_runner.go:130] ! I0612 22:02:30.216981       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0612 15:03:44.193576   13752 command_runner.go:130] ! I0612 22:02:30.231997       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0612 15:03:44.194179   13752 command_runner.go:130] ! W0612 22:02:30.232097       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194179   13752 command_runner.go:130] ! W0612 22:02:30.232107       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194179   13752 command_runner.go:130] ! I0612 22:02:30.232792       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0612 15:03:44.194179   13752 command_runner.go:130] ! W0612 22:02:30.232881       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194179   13752 command_runner.go:130] ! I0612 22:02:30.233864       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0612 15:03:44.194179   13752 command_runner.go:130] ! I0612 22:02:30.235099       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0612 15:03:44.194179   13752 command_runner.go:130] ! W0612 22:02:30.235211       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0612 15:03:44.194336   13752 command_runner.go:130] ! W0612 22:02:30.235220       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0612 15:03:44.194336   13752 command_runner.go:130] ! I0612 22:02:30.237278       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0612 15:03:44.194378   13752 command_runner.go:130] ! W0612 22:02:30.237314       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0612 15:03:44.194409   13752 command_runner.go:130] ! I0612 22:02:30.238451       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.238555       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.238564       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.239199       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.239289       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.239352       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.239881       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.242982       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.243157       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.243324       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.245920       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.246121       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.246235       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.249402       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.249562       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.255420       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.255587       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.255759       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.257021       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.257206       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.257308       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.269872       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.270105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.270312       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.272005       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.273608       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.273714       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.273724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! I0612 22:02:30.277668       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.277779       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0612 15:03:44.194428   13752 command_runner.go:130] ! W0612 22:02:30.277789       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0612 15:03:44.195028   13752 command_runner.go:130] ! I0612 22:02:30.280767       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0612 15:03:44.195028   13752 command_runner.go:130] ! W0612 22:02:30.280916       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.195091   13752 command_runner.go:130] ! W0612 22:02:30.280928       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:44.195091   13752 command_runner.go:130] ! I0612 22:02:30.281776       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0612 15:03:44.195091   13752 command_runner.go:130] ! W0612 22:02:30.281806       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.195091   13752 command_runner.go:130] ! I0612 22:02:30.296752       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0612 15:03:44.195091   13752 command_runner.go:130] ! W0612 22:02:30.296810       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:44.195091   13752 command_runner.go:130] ! I0612 22:02:30.901606       1 secure_serving.go:213] Serving securely on [::]:8443
	I0612 15:03:44.195199   13752 command_runner.go:130] ! I0612 22:02:30.901766       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:44.195199   13752 command_runner.go:130] ! I0612 22:02:30.903281       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0612 15:03:44.195291   13752 command_runner.go:130] ! I0612 22:02:30.903373       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0612 15:03:44.195291   13752 command_runner.go:130] ! I0612 22:02:30.903401       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0612 15:03:44.195338   13752 command_runner.go:130] ! I0612 22:02:30.903987       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0612 15:03:44.195338   13752 command_runner.go:130] ! I0612 22:02:30.904124       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.904843       1 aggregator.go:163] waiting for initial CRD sync...
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.905095       1 controller.go:78] Starting OpenAPI AggregationController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.906424       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.901780       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.907108       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.907337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.901790       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.901800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.909555       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.909699       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.910003       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.911734       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.911846       1 controller.go:116] Starting legacy_token_tracking_controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.911861       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.912590       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.912666       1 available_controller.go:423] Starting AvailableConditionController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.912673       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.913776       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.953613       1 controller.go:139] Starting OpenAPI controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.953929       1 controller.go:87] Starting OpenAPI V3 controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.954278       1 naming_controller.go:291] Starting NamingConditionController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.954516       1 establishing_controller.go:76] Starting EstablishingController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.954966       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.955230       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:30.955507       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.003418       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.009966       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.010019       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.010029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.010400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.011993       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.012756       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.017182       1 aggregator.go:165] initial CRD sync complete...
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.017223       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.017231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.017238       1 cache.go:39] Caches are synced for autoregister controller
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.018109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 15:03:44.195369   13752 command_runner.go:130] ! I0612 22:02:31.018524       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:44.195954   13752 command_runner.go:130] ! I0612 22:02:31.019519       1 policy_source.go:224] refreshing policies
	I0612 15:03:44.195954   13752 command_runner.go:130] ! I0612 22:02:31.020420       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 15:03:44.195954   13752 command_runner.go:130] ! I0612 22:02:31.091331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 15:03:44.196001   13752 command_runner.go:130] ! I0612 22:02:31.909532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 15:03:44.196001   13752 command_runner.go:130] ! W0612 22:02:32.355789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.198.154 172.23.200.184]
	I0612 15:03:44.196001   13752 command_runner.go:130] ! I0612 22:02:32.358485       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 15:03:44.196001   13752 command_runner.go:130] ! I0612 22:02:32.377254       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 15:03:44.196084   13752 command_runner.go:130] ! I0612 22:02:33.727670       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 15:03:44.196084   13752 command_runner.go:130] ! I0612 22:02:34.008881       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 15:03:44.196118   13752 command_runner.go:130] ! I0612 22:02:34.034607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 15:03:44.196118   13752 command_runner.go:130] ! I0612 22:02:34.157870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 15:03:44.196147   13752 command_runner.go:130] ! I0612 22:02:34.176471       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 15:03:44.196147   13752 command_runner.go:130] ! W0612 22:02:52.350035       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.200.184]
	I0612 15:03:44.203649   13752 logs.go:123] Gathering logs for kube-proxy [c4842faba751] ...
	I0612 15:03:44.203909   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4842faba751"
	I0612 15:03:44.229657   13752 command_runner.go:130] ! I0612 21:39:47.407607       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.423801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.198.154"]
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.480061       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.480182       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.480205       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.484521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.485171       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.485535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488126       1 config.go:192] "Starting service config controller"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488188       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.488969       1 config.go:319] "Starting node config controller"
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.489001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.588500       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.588641       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:44.230143   13752 command_runner.go:130] ! I0612 21:39:47.589226       1 shared_informer.go:320] Caches are synced for node config
	I0612 15:03:44.232405   13752 logs.go:123] Gathering logs for container status ...
	I0612 15:03:44.232405   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 15:03:44.292648   13752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0612 15:03:44.294678   13752 command_runner.go:130] > f2a949d407287       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   2434f89aefe00       busybox-fc5497c4f-45qqd
	I0612 15:03:44.294678   13752 command_runner.go:130] > 26e5daf354e36       cbb01a7bd410d                                                                                         8 seconds ago        Running             coredns                   1                   986567ef57643       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:44.294678   13752 command_runner.go:130] > 448e057077ddc       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   5287b61207e62       storage-provisioner
	I0612 15:03:44.294741   13752 command_runner.go:130] > cccfd1e9fef5e       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a20975d81b350       kindnet-bqlg8
	I0612 15:03:44.294782   13752 command_runner.go:130] > 3546a5c003210       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5287b61207e62       storage-provisioner
	I0612 15:03:44.294782   13752 command_runner.go:130] > 227a905829b07       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   435c56b0fbbbb       kube-proxy-47lr8
	I0612 15:03:44.294834   13752 command_runner.go:130] > 6b61f5f6483d5       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   76517193a960a       etcd-multinode-025000
	I0612 15:03:44.294940   13752 command_runner.go:130] > bbe2d2e51b5f3       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   20cbfb3fb8531       kube-apiserver-multinode-025000
	I0612 15:03:44.294981   13752 command_runner.go:130] > 7acc8ff0a9317       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   a228f6c30fdf4       kube-controller-manager-multinode-025000
	I0612 15:03:44.295017   13752 command_runner.go:130] > 755750ecd1e39       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   da184577f0371       kube-scheduler-multinode-025000
	I0612 15:03:44.295068   13752 command_runner.go:130] > bfc0382d49a48       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   84a9b747663ca       busybox-fc5497c4f-45qqd
	I0612 15:03:44.295068   13752 command_runner.go:130] > e83cf4eef49e4       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   894c58e9fe752       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:44.295154   13752 command_runner.go:130] > 4d60d82f6bc5d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   92f2d5f19e95e       kindnet-bqlg8
	I0612 15:03:44.295191   13752 command_runner.go:130] > c4842faba751e       747097150317f                                                                                         23 minutes ago       Exited              kube-proxy                0                   fad98f611536b       kube-proxy-47lr8
	I0612 15:03:44.295267   13752 command_runner.go:130] > 6b021c195669e       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d9933fdc9ca72       kube-scheduler-multinode-025000
	I0612 15:03:44.295343   13752 command_runner.go:130] > 685d167da53c9       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bb4351fab502e       kube-controller-manager-multinode-025000
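The container-status table above comes from a shell fallback visible in the Run line before it: try crictl first, and fall back to docker ps when crictl is absent. Below is a minimal Go sketch of that same fallback, assuming a host with /bin/bash and a Docker CLI on PATH; the helper name runContainerStatus is illustrative, not a minikube function.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runContainerStatus mirrors the fallback seen in the log above:
    // `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`.
    // This is a sketch, not minikube's implementation.
    func runContainerStatus() (string, error) {
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := runContainerStatus()
    	if err != nil {
    		fmt.Println("listing failed:", err)
    	}
    	fmt.Print(out)
    }

The `|| echo crictl` trick keeps the first command syntactically valid even when `which crictl` prints nothing; if crictl then fails to run, the `||` on the outer command falls through to `docker ps -a`.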
	I0612 15:03:44.298591   13752 logs.go:123] Gathering logs for etcd [6b61f5f6483d] ...
	I0612 15:03:44.298655   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61f5f6483d"
	I0612 15:03:44.322708   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.594582Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:44.326186   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.595941Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.200.184:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.200.184:2380","--initial-cluster=multinode-025000=https://172.23.200.184:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.200.184:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.200.184:2380","--name=multinode-025000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0612 15:03:44.326186   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596165Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0612 15:03:44.326343   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.596271Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:44.326383   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596356Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.200.184:2380"]}
	I0612 15:03:44.326413   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596492Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:44.326491   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.611167Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"]}
	I0612 15:03:44.326562   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.613093Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-025000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0612 15:03:44.326562   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.643295Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"27.151363ms"}
	I0612 15:03:44.326656   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.674268Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0612 15:03:44.326656   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702241Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","commit-index":2039}
	I0612 15:03:44.326742   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=()"}
	I0612 15:03:44.326742   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became follower at term 2"}
	I0612 15:03:44.326742   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.70261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b93ef5bd064a9684 [peers: [], term: 2, commit: 2039, applied: 0, lastindex: 2039, lastterm: 2]"}
	I0612 15:03:44.326821   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.719372Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0612 15:03:44.326821   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.724082Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1403}
	I0612 15:03:44.326821   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.735755Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1769}
	I0612 15:03:44.326913   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.743333Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0612 15:03:44.326913   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.753311Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b93ef5bd064a9684","timeout":"7s"}
	I0612 15:03:44.326913   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755587Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b93ef5bd064a9684"}
	I0612 15:03:44.326913   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755671Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b93ef5bd064a9684","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0612 15:03:44.326998   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758078Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0612 15:03:44.326998   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758939Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0612 15:03:44.326998   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0612 15:03:44.327089   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0612 15:03:44.327089   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=(13348376537775904388)"}
	I0612 15:03:44.327089   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759589Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","added-peer-id":"b93ef5bd064a9684","added-peer-peer-urls":["https://172.23.198.154:2380"]}
	I0612 15:03:44.327089   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.760197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","cluster-version":"3.5"}
	I0612 15:03:44.327194   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.761198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0612 15:03:44.327194   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.764395Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:44.327305   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.765492Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b93ef5bd064a9684","initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766195Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766744Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.200.184:2380"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.767384Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.200.184:2380"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 is starting a new election at term 2"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.50332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became pre-candidate at term 2"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgPreVoteResp from b93ef5bd064a9684 at term 2"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became candidate at term 3"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgVoteResp from b93ef5bd064a9684 at term 3"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became leader at term 3"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b93ef5bd064a9684 elected leader b93ef5bd064a9684 at term 3"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b93ef5bd064a9684","local-member-attributes":"{Name:multinode-025000 ClientURLs:[https://172.23.200.184:2379]}","request-path":"/0/members/b93ef5bd064a9684/attributes","cluster-id":"a7fa2563dcb4b7b8","publish-timeout":"7s"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.512996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.513243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.514729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0612 15:03:44.327381   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.515422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.200.184:2379"}
	I0612 15:03:46.838416   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:03:46.838687   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 200:
	ok
	I0612 15:03:46.846512   13752 round_trippers.go:463] GET https://172.23.200.184:8443/version
	I0612 15:03:46.846512   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:46.846512   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:46.846512   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:46.846774   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:46.846774   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:46.846774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:46.846774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Content-Length: 263
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:46 GMT
	I0612 15:03:46.846774   13752 round_trippers.go:580]     Audit-Id: 8cdbc2a9-51bd-41b7-90d2-8656a07d41d2
	I0612 15:03:46.846774   13752 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0612 15:03:46.846774   13752 api_server.go:141] control plane version: v1.30.1
	I0612 15:03:46.846774   13752 api_server.go:131] duration metric: took 3.6629527s to wait for apiserver health ...
	I0612 15:03:46.846774   13752 system_pods.go:43] waiting for kube-system pods to appear ...
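The healthz and version exchange above is a plain HTTPS GET against the apiserver: a 200 with body "ok" from /healthz, then a JSON payload from /version. A minimal sketch of the same probe follows, assuming the apiserver address from the log; InsecureSkipVerify is used only to keep the sketch self-contained, whereas the real check trusts the cluster CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkAPIServer sketches the probe seen above: GET /healthz, expect
    // HTTP 200 with body "ok", then GET /version for the build info.
    func checkAPIServer(base string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption: skip certificate verification for brevity.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	ver, err := client.Get(base + "/version")
    	if err != nil {
    		return err
    	}
    	defer ver.Body.Close()
    	info, _ := io.ReadAll(ver.Body)
    	fmt.Printf("version payload: %s\n", info)
    	return nil
    }

    func main() {
    	if err := checkAPIServer("https://172.23.200.184:8443"); err != nil {
    		fmt.Println("apiserver not healthy:", err)
    	}
    }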
	I0612 15:03:46.848941   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0612 15:03:46.878233   13752 command_runner.go:130] > bbe2d2e51b5f
	I0612 15:03:46.878618   13752 logs.go:276] 1 containers: [bbe2d2e51b5f]
	I0612 15:03:46.888520   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0612 15:03:46.912452   13752 command_runner.go:130] > 6b61f5f6483d
	I0612 15:03:46.912542   13752 logs.go:276] 1 containers: [6b61f5f6483d]
	I0612 15:03:46.921572   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0612 15:03:46.945490   13752 command_runner.go:130] > 26e5daf354e3
	I0612 15:03:46.945558   13752 command_runner.go:130] > e83cf4eef49e
	I0612 15:03:46.945592   13752 logs.go:276] 2 containers: [26e5daf354e3 e83cf4eef49e]
	I0612 15:03:46.954457   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0612 15:03:46.976262   13752 command_runner.go:130] > 755750ecd1e3
	I0612 15:03:46.976262   13752 command_runner.go:130] > 6b021c195669
	I0612 15:03:46.976262   13752 logs.go:276] 2 containers: [755750ecd1e3 6b021c195669]
	I0612 15:03:46.985535   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0612 15:03:47.014567   13752 command_runner.go:130] > 227a905829b0
	I0612 15:03:47.015903   13752 command_runner.go:130] > c4842faba751
	I0612 15:03:47.015903   13752 logs.go:276] 2 containers: [227a905829b0 c4842faba751]
	I0612 15:03:47.025860   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0612 15:03:47.051274   13752 command_runner.go:130] > 7acc8ff0a931
	I0612 15:03:47.051274   13752 command_runner.go:130] > 685d167da53c
	I0612 15:03:47.051348   13752 logs.go:276] 2 containers: [7acc8ff0a931 685d167da53c]
	I0612 15:03:47.063646   13752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0612 15:03:47.092147   13752 command_runner.go:130] > cccfd1e9fef5
	I0612 15:03:47.092147   13752 command_runner.go:130] > 4d60d82f6bc5
	I0612 15:03:47.092147   13752 logs.go:276] 2 containers: [cccfd1e9fef5 4d60d82f6bc5]
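The enumeration above repeats one pattern per control-plane component: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to find container IDs, then docker logs --tail 400 <id> on each. A sketch of that loop, assuming a Docker CLI on PATH; componentLogs is an illustrative helper, not a minikube function.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // componentLogs lists container IDs whose names match k8s_<name> and
    // tails the last 400 log lines of each, mirroring the gathering above.
    func componentLogs(name string) (map[string]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	logs := make(map[string]string)
    	for _, id := range strings.Fields(string(out)) {
    		tail, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return nil, err
    		}
    		logs[id] = string(tail)
    	}
    	return logs, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
    		m, err := componentLogs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers\n", c, len(m))
    	}
    }

Note that the filter matches both running and exited containers, which is why restarted components (coredns, kube-proxy, kube-scheduler, kube-controller-manager, kindnet) report two IDs each above.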
	I0612 15:03:47.092147   13752 logs.go:123] Gathering logs for kindnet [4d60d82f6bc5] ...
	I0612 15:03:47.092147   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d60d82f6bc5"
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.982546       1 main.go:227] handling current node
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.982561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.982568       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.982982       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.120626   13752 command_runner.go:130] ! I0612 21:48:53.983049       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.124269   13752 command_runner.go:130] ! I0612 21:49:03.989649       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.124269   13752 command_runner.go:130] ! I0612 21:49:03.989791       1 main.go:227] handling current node
	I0612 15:03:47.124269   13752 command_runner.go:130] ! I0612 21:49:03.989809       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125640   13752 command_runner.go:130] ! I0612 21:49:03.989817       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125640   13752 command_runner.go:130] ! I0612 21:49:03.990195       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125640   13752 command_runner.go:130] ! I0612 21:49:03.990415       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125640   13752 command_runner.go:130] ! I0612 21:49:14.000384       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000493       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000507       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000513       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000627       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:14.000640       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.006829       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.006871       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.006883       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.006889       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.007645       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:24.007745       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.016679       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.016806       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.016838       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.016845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.017149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:34.017279       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.025835       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.025933       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.025947       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.025955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.026381       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:44.026533       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033148       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033257       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033273       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033281       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033402       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:49:54.033435       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.046279       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.046719       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.046832       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.047109       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.047537       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:04.047572       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064171       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064216       1 main.go:227] handling current node
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064230       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064236       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064574       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:14.064665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.125719   13752 command_runner.go:130] ! I0612 21:50:24.071894       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.071935       1 main.go:227] handling current node
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.071949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.071955       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.072148       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:24.072184       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:34.086428       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:34.086522       1 main.go:227] handling current node
	I0612 15:03:47.126283   13752 command_runner.go:130] ! I0612 21:50:34.086536       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:34.086543       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:34.086690       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:34.086707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:44.093862       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:44.093905       1 main.go:227] handling current node
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:44.093919       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126384   13752 command_runner.go:130] ! I0612 21:50:44.093925       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126456   13752 command_runner.go:130] ! I0612 21:50:44.094840       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126456   13752 command_runner.go:130] ! I0612 21:50:44.094916       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126486   13752 command_runner.go:130] ! I0612 21:50:54.102869       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126486   13752 command_runner.go:130] ! I0612 21:50:54.103074       1 main.go:227] handling current node
	I0612 15:03:47.126486   13752 command_runner.go:130] ! I0612 21:50:54.103091       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126486   13752 command_runner.go:130] ! I0612 21:50:54.103100       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:50:54.103237       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:50:54.103276       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:51:04.110391       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:51:04.110501       1 main.go:227] handling current node
	I0612 15:03:47.126554   13752 command_runner.go:130] ! I0612 21:51:04.110517       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:04.110556       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:04.110721       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:04.110794       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:14.121126       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126626   13752 command_runner.go:130] ! I0612 21:51:14.121263       1 main.go:227] handling current node
	I0612 15:03:47.126692   13752 command_runner.go:130] ! I0612 21:51:14.121280       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126692   13752 command_runner.go:130] ! I0612 21:51:14.121288       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126692   13752 command_runner.go:130] ! I0612 21:51:14.121430       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126692   13752 command_runner.go:130] ! I0612 21:51:14.121462       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126758   13752 command_runner.go:130] ! I0612 21:51:24.131659       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126758   13752 command_runner.go:130] ! I0612 21:51:24.131690       1 main.go:227] handling current node
	I0612 15:03:47.126758   13752 command_runner.go:130] ! I0612 21:51:24.131702       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126758   13752 command_runner.go:130] ! I0612 21:51:24.131708       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126823   13752 command_runner.go:130] ! I0612 21:51:24.132287       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126880   13752 command_runner.go:130] ! I0612 21:51:24.132319       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126880   13752 command_runner.go:130] ! I0612 21:51:34.139419       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.139546       1 main.go:227] handling current node
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.139561       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.139570       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.140149       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:34.140253       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.126920   13752 command_runner.go:130] ! I0612 21:51:44.152295       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.126997   13752 command_runner.go:130] ! I0612 21:51:44.152430       1 main.go:227] handling current node
	I0612 15:03:47.126997   13752 command_runner.go:130] ! I0612 21:51:44.152464       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.126997   13752 command_runner.go:130] ! I0612 21:51:44.152471       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127062   13752 command_runner.go:130] ! I0612 21:51:44.153262       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127086   13752 command_runner.go:130] ! I0612 21:51:44.153471       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127117   13752 command_runner.go:130] ! I0612 21:51:54.160684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127117   13752 command_runner.go:130] ! I0612 21:51:54.160938       1 main.go:227] handling current node
	I0612 15:03:47.127117   13752 command_runner.go:130] ! I0612 21:51:54.160953       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:51:54.160960       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:51:54.161457       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:51:54.161482       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:52:04.170421       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:52:04.170526       1 main.go:227] handling current node
	I0612 15:03:47.127157   13752 command_runner.go:130] ! I0612 21:52:04.170541       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127272   13752 command_runner.go:130] ! I0612 21:52:04.170548       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127272   13752 command_runner.go:130] ! I0612 21:52:04.171076       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127303   13752 command_runner.go:130] ! I0612 21:52:04.171113       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180403       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180490       1 main.go:227] handling current node
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180508       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180516       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127341   13752 command_runner.go:130] ! I0612 21:52:14.180994       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127415   13752 command_runner.go:130] ! I0612 21:52:14.181032       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127415   13752 command_runner.go:130] ! I0612 21:52:24.195314       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127439   13752 command_runner.go:130] ! I0612 21:52:24.195545       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:24.195735       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:24.195807       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:24.196026       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:24.196064       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.202013       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.202806       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.202932       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.203029       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.203265       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:34.203299       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209440       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209476       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209546       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.209839       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:44.210283       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223351       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223443       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223459       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223466       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223810       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:52:54.223840       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.236876       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.237155       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.237949       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.238341       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.238673       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:04.238707       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245069       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245110       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245122       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245131       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245834       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:14.245932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.258923       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.258965       1 main.go:227] handling current node
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.258977       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.258983       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.127469   13752 command_runner.go:130] ! I0612 21:53:24.259367       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:24.259399       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:34.265573       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:34.265738       1 main.go:227] handling current node
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:34.265787       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128000   13752 command_runner.go:130] ! I0612 21:53:34.265797       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128097   13752 command_runner.go:130] ! I0612 21:53:34.266180       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128097   13752 command_runner.go:130] ! I0612 21:53:34.266257       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128097   13752 command_runner.go:130] ! I0612 21:53:44.278968       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128097   13752 command_runner.go:130] ! I0612 21:53:44.279173       1 main.go:227] handling current node
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:44.279207       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:44.279294       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:44.279698       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:44.279829       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:54.290366       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:54.290472       1 main.go:227] handling current node
	I0612 15:03:47.128158   13752 command_runner.go:130] ! I0612 21:53:54.290487       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128248   13752 command_runner.go:130] ! I0612 21:53:54.290494       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128248   13752 command_runner.go:130] ! I0612 21:53:54.291158       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128248   13752 command_runner.go:130] ! I0612 21:53:54.291263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128248   13752 command_runner.go:130] ! I0612 21:54:04.308014       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128316   13752 command_runner.go:130] ! I0612 21:54:04.308117       1 main.go:227] handling current node
	I0612 15:03:47.128316   13752 command_runner.go:130] ! I0612 21:54:04.308133       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128316   13752 command_runner.go:130] ! I0612 21:54:04.308142       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128388   13752 command_runner.go:130] ! I0612 21:54:04.308605       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128414   13752 command_runner.go:130] ! I0612 21:54:04.308643       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128414   13752 command_runner.go:130] ! I0612 21:54:14.316271       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128446   13752 command_runner.go:130] ! I0612 21:54:14.316380       1 main.go:227] handling current node
	I0612 15:03:47.128473   13752 command_runner.go:130] ! I0612 21:54:14.316396       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128473   13752 command_runner.go:130] ! I0612 21:54:14.316403       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128538   13752 command_runner.go:130] ! I0612 21:54:14.316942       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128603   13752 command_runner.go:130] ! I0612 21:54:14.316959       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128603   13752 command_runner.go:130] ! I0612 21:54:24.330853       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128668   13752 command_runner.go:130] ! I0612 21:54:24.331009       1 main.go:227] handling current node
	I0612 15:03:47.128694   13752 command_runner.go:130] ! I0612 21:54:24.331025       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128720   13752 command_runner.go:130] ! I0612 21:54:24.331033       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128720   13752 command_runner.go:130] ! I0612 21:54:24.331178       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128748   13752 command_runner.go:130] ! I0612 21:54:24.331213       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128783   13752 command_runner.go:130] ! I0612 21:54:34.340396       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128783   13752 command_runner.go:130] ! I0612 21:54:34.340543       1 main.go:227] handling current node
	I0612 15:03:47.128783   13752 command_runner.go:130] ! I0612 21:54:34.340558       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.128836   13752 command_runner.go:130] ! I0612 21:54:34.340565       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.128885   13752 command_runner.go:130] ! I0612 21:54:34.340924       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.128910   13752 command_runner.go:130] ! I0612 21:54:34.341013       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.128910   13752 command_runner.go:130] ! I0612 21:54:44.347468       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.128910   13752 command_runner.go:130] ! I0612 21:54:44.347599       1 main.go:227] handling current node
	I0612 15:03:47.128974   13752 command_runner.go:130] ! I0612 21:54:44.347614       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129014   13752 command_runner.go:130] ! I0612 21:54:44.347622       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:44.348279       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:44.348396       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.364900       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365031       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365046       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365054       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365542       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:54:54.365727       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381041       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381087       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381103       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381110       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381700       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:04.381853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.395619       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.395666       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.395679       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.395686       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.396514       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:14.396536       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.411927       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412012       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412028       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412036       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412568       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:24.412661       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420011       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420100       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420115       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420481       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:34.420570       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:44.432502       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:44.432604       1 main.go:227] handling current node
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:44.432620       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129044   13752 command_runner.go:130] ! I0612 21:55:44.432632       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:44.432881       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:44.433061       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:54.446991       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:54.447440       1 main.go:227] handling current node
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:54.447622       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129576   13752 command_runner.go:130] ! I0612 21:55:54.447655       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:55:54.447830       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:55:54.447901       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:56:04.463393       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:56:04.463546       1 main.go:227] handling current node
	I0612 15:03:47.129758   13752 command_runner.go:130] ! I0612 21:56:04.463575       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129818   13752 command_runner.go:130] ! I0612 21:56:04.463596       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129841   13752 command_runner.go:130] ! I0612 21:56:04.463900       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129841   13752 command_runner.go:130] ! I0612 21:56:04.463932       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.477690       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.477837       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.477852       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.477860       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.478029       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:14.478096       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.485525       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.485620       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.485655       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.485663       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.486202       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:24.486237       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.502904       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.502951       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.502964       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.502970       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.503088       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:34.503684       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512292       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512356       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512368       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512374       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.512909       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:44.513033       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.520903       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521017       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521034       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521041       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521441       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:56:54.521665       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.535531       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.535625       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.535665       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.535672       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.536272       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:04.536355       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559304       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559354       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559375       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559382       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.559735       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:14.560332       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:24.568057       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:24.568103       1 main.go:227] handling current node
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:24.568116       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.129869   13752 command_runner.go:130] ! I0612 21:57:24.568122       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:24.568938       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:24.569042       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584121       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584277       1 main.go:227] handling current node
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584502       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584607       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.584995       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:34.585095       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:44.600201       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:44.600339       1 main.go:227] handling current node
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:44.600353       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.130402   13752 command_runner.go:130] ! I0612 21:57:44.600361       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:44.600842       1 main.go:223] Handling node with IPs: map[172.23.206.201:{}]
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:44.600859       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.2.0/24] 
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:54.615436       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:54.615497       1 main.go:227] handling current node
	I0612 15:03:47.130557   13752 command_runner.go:130] ! I0612 21:57:54.615511       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.130624   13752 command_runner.go:130] ! I0612 21:57:54.615536       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130639   13752 command_runner.go:130] ! I0612 21:58:04.629487       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130639   13752 command_runner.go:130] ! I0612 21:58:04.629657       1 main.go:227] handling current node
	I0612 15:03:47.130639   13752 command_runner.go:130] ! I0612 21:58:04.629797       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.130639   13752 command_runner.go:130] ! I0612 21:58:04.629891       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.130701   13752 command_runner.go:130] ! I0612 21:58:04.630131       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.130701   13752 command_runner.go:130] ! I0612 21:58:04.631059       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.130994   13752 command_runner.go:130] ! I0612 21:58:04.631221       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:47.130994   13752 command_runner.go:130] ! I0612 21:58:14.647500       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.130994   13752 command_runner.go:130] ! I0612 21:58:14.647527       1 main.go:227] handling current node
	I0612 15:03:47.130994   13752 command_runner.go:130] ! I0612 21:58:14.647539       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:14.647544       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:14.647661       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:14.647672       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:24.655905       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:24.656017       1 main.go:227] handling current node
	I0612 15:03:47.131063   13752 command_runner.go:130] ! I0612 21:58:24.656064       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:24.656140       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:24.656636       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:24.656721       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:34.670254       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:34.670590       1 main.go:227] handling current node
	I0612 15:03:47.131153   13752 command_runner.go:130] ! I0612 21:58:34.670966       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131246   13752 command_runner.go:130] ! I0612 21:58:34.671845       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131270   13752 command_runner.go:130] ! I0612 21:58:34.672269       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131270   13752 command_runner.go:130] ! I0612 21:58:34.672369       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131270   13752 command_runner.go:130] ! I0612 21:58:44.682684       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.682854       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.682877       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.682887       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.683737       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:44.683808       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691077       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691167       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691199       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691207       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691344       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:58:54.691357       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.700863       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701017       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701032       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701040       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701620       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:04.701736       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.717668       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.717949       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.717991       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.718050       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.718200       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:14.718263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724311       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724441       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724456       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724464       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724785       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:24.724853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.737266       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.737410       1 main.go:227] handling current node
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.737425       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.737432       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.738157       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:34.738269       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:44.746123       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131297   13752 command_runner.go:130] ! I0612 21:59:44.746292       1 main.go:227] handling current node
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:44.746313       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:44.746332       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:44.746856       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:44.746925       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.752611       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.752658       1 main.go:227] handling current node
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.752671       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.752678       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.753183       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.131834   13752 command_runner.go:130] ! I0612 21:59:54.753277       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
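	The kindnet log above is its steady node-sync loop: roughly every 10 seconds it walks the node list, logs each node's IP map and pod CIDR, and marks the local node with "handling current node". The noteworthy event is at 21:58:04, when multinode-025000-m03 comes back with a new IP (172.23.206.72, previously 172.23.206.201) and a new pod CIDR (10.244.3.0/24, previously 10.244.2.0/24), so kindnet installs a fresh route for it (the routes.go:62 entry). If those routes ever need checking by hand, they can be listed from the control-plane node; a minimal sketch, assuming the profile name from this test and the addresses from the log above:
	
	    out/minikube-windows-amd64.exe -p multinode-025000 ssh -- ip route show
	    # expected to include per-node pod-CIDR routes such as:
	    #   10.244.1.0/24 via 172.23.196.105 dev eth0
	    #   10.244.3.0/24 via 172.23.206.72 dev eth0
	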
	I0612 15:03:47.150134   13752 logs.go:123] Gathering logs for kube-apiserver [bbe2d2e51b5f] ...
	I0612 15:03:47.150134   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbe2d2e51b5f"
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.032945       1 options.go:221] external host was not specified, using 172.23.200.184
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.036290       1 server.go:148] Version: v1.30.1
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.036339       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.916544       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.917947       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.921952       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.922146       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:28.922426       1 instance.go:299] Using reconciler: lease
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:29.570201       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:29.570355       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:29.801222       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:29.801702       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.046166       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.216981       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.231997       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.232097       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.232107       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.232792       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.232881       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.233864       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.235099       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.235211       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! W0612 22:02:30.235220       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0612 15:03:47.178282   13752 command_runner.go:130] ! I0612 22:02:30.237278       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.237314       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0612 15:03:47.179806   13752 command_runner.go:130] ! I0612 22:02:30.238451       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.238555       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.238564       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.179806   13752 command_runner.go:130] ! I0612 22:02:30.239199       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.239289       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.179806   13752 command_runner.go:130] ! W0612 22:02:30.239352       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.179994   13752 command_runner.go:130] ! I0612 22:02:30.239881       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0612 15:03:47.180018   13752 command_runner.go:130] ! I0612 22:02:30.242982       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0612 15:03:47.180018   13752 command_runner.go:130] ! W0612 22:02:30.243157       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180102   13752 command_runner.go:130] ! W0612 22:02:30.243324       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180102   13752 command_runner.go:130] ! I0612 22:02:30.245920       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0612 15:03:47.180102   13752 command_runner.go:130] ! W0612 22:02:30.246121       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180138   13752 command_runner.go:130] ! W0612 22:02:30.246235       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180138   13752 command_runner.go:130] ! I0612 22:02:30.249402       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.249562       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.255420       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.255587       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.255759       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.257021       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.257206       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.257308       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.269872       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.270105       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.270312       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.272005       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.273608       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.273714       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.273724       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.277668       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.277779       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.277789       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.280767       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.280916       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.280928       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.281776       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.281806       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.296752       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0612 15:03:47.180211   13752 command_runner.go:130] ! W0612 22:02:30.296810       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.901606       1 secure_serving.go:213] Serving securely on [::]:8443
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.901766       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.903281       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.903373       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0612 15:03:47.180211   13752 command_runner.go:130] ! I0612 22:02:30.903401       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.903987       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.904124       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.904843       1 aggregator.go:163] waiting for initial CRD sync...
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.905095       1 controller.go:78] Starting OpenAPI AggregationController
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.906424       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.901780       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.907108       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.180739   13752 command_runner.go:130] ! I0612 22:02:30.907337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:47.180922   13752 command_runner.go:130] ! I0612 22:02:30.901790       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0612 15:03:47.180922   13752 command_runner.go:130] ! I0612 22:02:30.901800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.180922   13752 command_runner.go:130] ! I0612 22:02:30.909555       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0612 15:03:47.180922   13752 command_runner.go:130] ! I0612 22:02:30.909699       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0612 15:03:47.180990   13752 command_runner.go:130] ! I0612 22:02:30.910003       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0612 15:03:47.180990   13752 command_runner.go:130] ! I0612 22:02:30.911734       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0612 15:03:47.181024   13752 command_runner.go:130] ! I0612 22:02:30.911846       1 controller.go:116] Starting legacy_token_tracking_controller
	I0612 15:03:47.181024   13752 command_runner.go:130] ! I0612 22:02:30.911861       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0612 15:03:47.181024   13752 command_runner.go:130] ! I0612 22:02:30.912590       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0612 15:03:47.181067   13752 command_runner.go:130] ! I0612 22:02:30.912666       1 available_controller.go:423] Starting AvailableConditionController
	I0612 15:03:47.181067   13752 command_runner.go:130] ! I0612 22:02:30.912673       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0612 15:03:47.181067   13752 command_runner.go:130] ! I0612 22:02:30.913776       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0612 15:03:47.181144   13752 command_runner.go:130] ! I0612 22:02:30.953613       1 controller.go:139] Starting OpenAPI controller
	I0612 15:03:47.181144   13752 command_runner.go:130] ! I0612 22:02:30.953929       1 controller.go:87] Starting OpenAPI V3 controller
	I0612 15:03:47.181144   13752 command_runner.go:130] ! I0612 22:02:30.954278       1 naming_controller.go:291] Starting NamingConditionController
	I0612 15:03:47.181144   13752 command_runner.go:130] ! I0612 22:02:30.954516       1 establishing_controller.go:76] Starting EstablishingController
	I0612 15:03:47.181206   13752 command_runner.go:130] ! I0612 22:02:30.954966       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0612 15:03:47.181206   13752 command_runner.go:130] ! I0612 22:02:30.955230       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0612 15:03:47.181258   13752 command_runner.go:130] ! I0612 22:02:30.955507       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0612 15:03:47.181258   13752 command_runner.go:130] ! I0612 22:02:31.003418       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0612 15:03:47.181258   13752 command_runner.go:130] ! I0612 22:02:31.009966       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 15:03:47.181315   13752 command_runner.go:130] ! I0612 22:02:31.010019       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 15:03:47.181315   13752 command_runner.go:130] ! I0612 22:02:31.010029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 15:03:47.181315   13752 command_runner.go:130] ! I0612 22:02:31.010400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.011993       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.012756       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.017182       1 aggregator.go:165] initial CRD sync complete...
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.017223       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.017231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 15:03:47.181399   13752 command_runner.go:130] ! I0612 22:02:31.017238       1 cache.go:39] Caches are synced for autoregister controller
	I0612 15:03:47.181476   13752 command_runner.go:130] ! I0612 22:02:31.018109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 15:03:47.181506   13752 command_runner.go:130] ! I0612 22:02:31.018524       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 15:03:47.181506   13752 command_runner.go:130] ! I0612 22:02:31.019519       1 policy_source.go:224] refreshing policies
	I0612 15:03:47.181506   13752 command_runner.go:130] ! I0612 22:02:31.020420       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 15:03:47.181506   13752 command_runner.go:130] ! I0612 22:02:31.091331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:31.909532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0612 15:03:47.181588   13752 command_runner.go:130] ! W0612 22:02:32.355789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.198.154 172.23.200.184]
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:32.358485       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:32.377254       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:33.727670       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 15:03:47.181588   13752 command_runner.go:130] ! I0612 22:02:34.008881       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 15:03:47.181652   13752 command_runner.go:130] ! I0612 22:02:34.034607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 15:03:47.181677   13752 command_runner.go:130] ! I0612 22:02:34.157870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 15:03:47.181703   13752 command_runner.go:130] ! I0612 22:02:34.176471       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0612 15:03:47.181703   13752 command_runner.go:130] ! W0612 22:02:52.350035       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.200.184]
	I0612 15:03:47.189065   13752 logs.go:123] Gathering logs for kube-controller-manager [7acc8ff0a931] ...
	I0612 15:03:47.189065   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7acc8ff0a931"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.579013       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.927149       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.927184       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.930688       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.932993       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.933167       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:28.933539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.987820       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.988653       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.994458       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.995780       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:32.996873       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.005703       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.005720       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.006099       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.006120       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.011328       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.013199       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.013216       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:47.214369   13752 command_runner.go:130] ! W0612 22:02:33.045760       1 shared_informer.go:597] resyncPeriod 19h21m1.650821539s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:47.214369   13752 command_runner.go:130] ! I0612 22:02:33.046400       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:47.215082   13752 command_runner.go:130] ! I0612 22:02:33.046742       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:47.215174   13752 command_runner.go:130] ! I0612 22:02:33.047003       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:47.215174   13752 command_runner.go:130] ! I0612 22:02:33.047066       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:47.215174   13752 command_runner.go:130] ! I0612 22:02:33.047091       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:47.215225   13752 command_runner.go:130] ! I0612 22:02:33.047150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:47.215225   13752 command_runner.go:130] ! I0612 22:02:33.047175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:47.215279   13752 command_runner.go:130] ! I0612 22:02:33.047875       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:47.215321   13752 command_runner.go:130] ! I0612 22:02:33.048961       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049070       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049108       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049203       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049218       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049235       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049307       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! W0612 22:02:33.049318       1 shared_informer.go:597] resyncPeriod 16h27m54.164006095s is smaller than resyncCheckPeriod 23h18m38.368150047s and the informer has already started. Changing it to 23h18m38.368150047s
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049536       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049616       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049652       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049852       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.049880       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.052188       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.075270       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.088124       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.088224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.088312       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.092469       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.093016       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.093183       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.099173       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.099288       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:47.215378   13752 command_runner.go:130] ! I0612 22:02:33.099302       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:47.215967   13752 command_runner.go:130] ! I0612 22:02:33.099269       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:47.215967   13752 command_runner.go:130] ! I0612 22:02:33.099467       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:47.215967   13752 command_runner.go:130] ! I0612 22:02:33.102279       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:47.216013   13752 command_runner.go:130] ! I0612 22:02:33.103692       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:47.216140   13752 command_runner.go:130] ! I0612 22:02:33.103797       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.109335       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.109737       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.109801       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.109811       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.113018       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.114442       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.114573       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.118932       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.118955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.118979       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.119791       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.121411       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.119985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122332       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122409       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122432       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122572       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122710       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122722       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.122748       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132412       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132517       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132620       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132660       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.132669       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:47.216181   13752 command_runner.go:130] ! I0612 22:02:33.139478       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.139854       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.140261       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.169621       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.169819       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.169849       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.170074       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:47.216707   13752 command_runner.go:130] ! I0612 22:02:33.173816       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:47.216851   13752 command_runner.go:130] ! I0612 22:02:33.174120       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:47.216875   13752 command_runner.go:130] ! I0612 22:02:33.174130       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:47.216875   13752 command_runner.go:130] ! I0612 22:02:33.184678       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:47.216935   13752 command_runner.go:130] ! I0612 22:02:33.186030       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:47.216935   13752 command_runner.go:130] ! I0612 22:02:33.192152       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:47.216977   13752 command_runner.go:130] ! I0612 22:02:33.192257       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:47.217055   13752 command_runner.go:130] ! I0612 22:02:33.192268       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:47.217055   13752 command_runner.go:130] ! I0612 22:02:33.194361       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:47.217080   13752 command_runner.go:130] ! I0612 22:02:33.194659       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.194671       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.200378       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.200552       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.200579       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.203400       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.203797       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.203967       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.207566       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.207732       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.207743       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.207766       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.214389       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.214572       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.214655       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.220603       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.221181       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.222958       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:47.217110   13752 command_runner.go:130] ! E0612 22:02:33.228603       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.228994       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.253059       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.253281       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.253292       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.264081       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.266480       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.266606       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:47.217110   13752 command_runner.go:130] ! I0612 22:02:33.266742       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.380173       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.380458       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.380796       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.398346       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.401718       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.401737       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.495874       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.496386       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.498064       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:47.217638   13752 command_runner.go:130] ! I0612 22:02:33.698817       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:47.217838   13752 command_runner.go:130] ! I0612 22:02:33.699215       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:47.217838   13752 command_runner.go:130] ! I0612 22:02:33.699646       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:47.217838   13752 command_runner.go:130] ! I0612 22:02:33.744449       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:47.217926   13752 command_runner.go:130] ! I0612 22:02:33.744531       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:47.217926   13752 command_runner.go:130] ! I0612 22:02:33.744546       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:47.217986   13752 command_runner.go:130] ! E0612 22:02:33.807267       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.807295       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.856639       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.857088       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.857273       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.894016       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.896048       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.896083       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950707       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950731       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950771       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950821       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.950870       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.995005       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:33.995247       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.062766       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.063067       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.063362       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.063411       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.068203       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.068603       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.068777       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.071309       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.071638       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.071795       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.080804       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:47.217986   13752 command_runner.go:130] ! I0612 22:02:44.097810       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.100018       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.100030       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.102193       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.102337       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.102640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.218552   13752 command_runner.go:130] ! I0612 22:02:44.102796       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:47.218740   13752 command_runner.go:130] ! I0612 22:02:44.102925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:47.218740   13752 command_runner.go:130] ! I0612 22:02:44.102986       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.218740   13752 command_runner.go:130] ! I0612 22:02:44.113771       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:47.218740   13752 command_runner.go:130] ! I0612 22:02:44.115010       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:47.218823   13752 command_runner.go:130] ! I0612 22:02:44.115463       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:47.218845   13752 command_runner.go:130] ! I0612 22:02:44.119062       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:47.218845   13752 command_runner.go:130] ! I0612 22:02:44.121259       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:47.218845   13752 command_runner.go:130] ! I0612 22:02:44.124526       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:47.218845   13752 command_runner.go:130] ! I0612 22:02:44.124650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:47.218930   13752 command_runner.go:130] ! I0612 22:02:44.124971       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:47.218930   13752 command_runner.go:130] ! I0612 22:02:44.126246       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:47.218930   13752 command_runner.go:130] ! I0612 22:02:44.133682       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:47.218930   13752 command_runner.go:130] ! I0612 22:02:44.134026       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.141044       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.145563       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.158513       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.162319       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.162613       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.162653       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.163186       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164074       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164451       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164672       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164769       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.164780       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.167842       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.174384       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.182521       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.186460       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.194992       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.196327       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.196530       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.196665       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.200768       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.200988       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.201846       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.207493       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.228051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.792655ms"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.231633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.306µs"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.244808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.644732ms"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.246402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.002µs"
	I0612 15:03:47.218988   13752 command_runner.go:130] ! I0612 22:02:44.297636       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.304265       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.304486       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.311023       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.350865       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.351039       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.353535       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.369296       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.372273       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.381442       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:47.219513   13752 command_runner.go:130] ! I0612 22:02:44.821842       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:47.219679   13752 command_runner.go:130] ! I0612 22:02:44.870923       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:47.219679   13752 command_runner.go:130] ! I0612 22:02:44.871005       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:47.219679   13752 command_runner.go:130] ! I0612 22:03:11.878868       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.219679   13752 command_runner.go:130] ! I0612 22:03:24.254264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.921834ms"
	I0612 15:03:47.219782   13752 command_runner.go:130] ! I0612 22:03:24.256639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.601µs"
	I0612 15:03:47.219782   13752 command_runner.go:130] ! I0612 22:03:37.832133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.001µs"
	I0612 15:03:47.219837   13752 command_runner.go:130] ! I0612 22:03:37.905221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.518825ms"
	I0612 15:03:47.219861   13752 command_runner.go:130] ! I0612 22:03:37.905853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.201µs"
	I0612 15:03:47.219890   13752 command_runner.go:130] ! I0612 22:03:37.917312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.821108ms"
	I0612 15:03:47.219890   13752 command_runner.go:130] ! I0612 22:03:37.917472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
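The kube-controller-manager entries above all follow the same start-up handshake: each controller is registered ("Started controller"), announces "Waiting for caches to sync", and only begins reconciling once the matching "Caches are synced" line appears. A minimal sketch of that handshake with client-go's shared-informer API (illustrative only: the kubeconfig path, the 10-minute resync, and the choice of a Pod informer are assumptions, not minikube's or the controller-manager's actual code):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative client setup; the kubeconfig path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One shared informer factory; the resync period is arbitrary here.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	pods := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // begins the list/watch for every requested informer

	// The step behind the "Waiting for caches to sync" / "Caches are
	// synced" lines: block until the local cache reflects the API server.
	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced; reconcile loops may start")
}

The resyncPeriod warnings near the top of the block come from the same machinery: an event handler added to an already-started shared informer cannot request a resync interval shorter than the informer's check period, so client-go clamps it upward and logs the adjustment.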
	I0612 15:03:47.236954   13752 logs.go:123] Gathering logs for kindnet [cccfd1e9fef5] ...
	I0612 15:03:47.236954   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccfd1e9fef5"
	I0612 15:03:47.261820   13752 command_runner.go:130] ! I0612 22:02:33.621070       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0612 15:03:47.261820   13752 command_runner.go:130] ! I0612 22:02:33.621857       1 main.go:107] hostIP = 172.23.200.184
	I0612 15:03:47.261820   13752 command_runner.go:130] ! podIP = 172.23.200.184
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:02:33.622055       1 main.go:116] setting mtu 1500 for CNI 
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:02:33.622069       1 main.go:146] kindnetd IP family: "ipv4"
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:02:33.622082       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:03.928722       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:03.948068       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:03.948207       1 main.go:227] handling current node
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015006       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015280       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015617       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.23.196.105 Flags: [] Table: 0} 
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015960       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.264866   13752 command_runner.go:130] ! I0612 22:03:04.015976       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:04.016053       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.23.206.72 Flags: [] Table: 0} 
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032118       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032228       1 main.go:227] handling current node
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032243       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032255       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032739       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:14.032836       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.265406   13752 command_runner.go:130] ! I0612 22:03:24.045393       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.265557   13752 command_runner.go:130] ! I0612 22:03:24.045492       1 main.go:227] handling current node
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:24.045504       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:24.045510       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:24.045926       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:24.045941       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:34.052186       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.265603   13752 command_runner.go:130] ! I0612 22:03:34.052288       1 main.go:227] handling current node
	I0612 15:03:47.265690   13752 command_runner.go:130] ! I0612 22:03:34.052302       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.265690   13752 command_runner.go:130] ! I0612 22:03:34.052309       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.265690   13752 command_runner.go:130] ! I0612 22:03:34.052423       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.265690   13752 command_runner.go:130] ! I0612 22:03:34.052452       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 15:03:47.265750   13752 command_runner.go:130] ! I0612 22:03:44.068019       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 15:03:47.265750   13752 command_runner.go:130] ! I0612 22:03:44.068061       1 main.go:227] handling current node
	I0612 15:03:47.265802   13752 command_runner.go:130] ! I0612 22:03:44.068088       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 15:03:47.265802   13752 command_runner.go:130] ! I0612 22:03:44.068096       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 15:03:47.265802   13752 command_runner.go:130] ! I0612 22:03:44.068651       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 15:03:47.265802   13752 command_runner.go:130] ! I0612 22:03:44.068721       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
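The kindnet block above shows the pod-network dataplane at work: roughly every ten seconds the daemon lists the nodes, skips the one it runs on ("handling current node"), and installs one route per remote node so each pod CIDR (10.244.1.0/24, 10.244.3.0/24) is reachable via that node's host IP. A minimal sketch of one such route install with the vishvananda/netlink package (a sketch of the general technique, assuming a Linux host with CAP_NET_ADMIN; not kindnet's actual implementation):

package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

// addPodCIDRRoute installs "dst via gw", the shape of the
// "Adding route {... Dst: 10.244.1.0/24 ... Gw: 172.23.196.105 ...}"
// entries in the log above.
func addPodCIDRRoute(podCIDR, nodeIP string) error {
	_, dst, err := net.ParseCIDR(podCIDR)
	if err != nil {
		return err
	}
	// RouteReplace is idempotent: it creates the route or updates it
	// in place, so re-running the sync loop is safe.
	return netlink.RouteReplace(&netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP(nodeIP),
	})
}

func main() {
	if err := addPodCIDRRoute("10.244.1.0/24", "172.23.196.105"); err != nil {
		panic(err)
	}
}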
	I0612 15:03:47.269354   13752 logs.go:123] Gathering logs for Docker ...
	I0612 15:03:47.269354   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0612 15:03:47.301252   13752 command_runner.go:130] > Jun 12 22:00:59 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:00:59 minikube cri-dockerd[222]: time="2024-06-12T22:00:59Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:00 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube cri-dockerd[400]: time="2024-06-12T22:01:02Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:02 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube cri-dockerd[420]: time="2024-06-12T22:01:04Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:04 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301311   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:07 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
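The cri-docker.service loop above is ordinary systemd restart behavior: cri-dockerd exits fatally because dockerd is not yet answering on /var/run/docker.sock (the daemon only starts at 22:01:50 below), systemd schedules a restart after each failure, and once the start rate limit is hit it refuses further attempts with "Start request repeated too quickly". A unit-file fragment that would reproduce this pattern (illustrative directives and values, not the unit file cri-dockerd actually ships):

[Unit]
# Allow at most 3 start attempts in any 60-second window; the attempt
# after that is refused with "Start request repeated too quickly".
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
# Each failure logs "Scheduled restart job, restart counter is at N"
# and retries after RestartSec.
Restart=on-failure
RestartSec=2s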
	I0612 15:03:47.301894   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:47.302005   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.903212301Z" level=info msg="Starting up"
	I0612 15:03:47.302005   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.904075211Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:47.302048   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[647]: time="2024-06-12T22:01:50.905013523Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=653
	I0612 15:03:47.302077   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.936715611Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:47.302077   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960715605Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:47.302077   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960765806Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:47.302157   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.960836707Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:47.302157   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961045509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961654317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961681417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.961916220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962126123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962152723Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962167223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.962695730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.963400938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966083771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966199872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966330074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.966461076Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967039883Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967257385Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.967282486Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974400773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974631276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974732277Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974755077Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974771478Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.974844078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975137982Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975475986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:47.302210   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975634588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975657088Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975672789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975691989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975721989Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975744389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302737   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975762790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302892   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975776490Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302892   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975789190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302892   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975800790Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.302987   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975819990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.302987   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975835091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303048   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975847091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303072   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975859491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303101   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975870791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303141   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975883291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303141   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975894491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303141   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975906891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303211   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975920192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303236   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975935492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975947192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975958792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975971092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.975989492Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976009893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976030193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976044093Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976167595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976210595Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976227295Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976239996Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976250696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976263096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976273096Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976489199Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976766002Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976819403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:50 multinode-025000 dockerd[653]: time="2024-06-12T22:01:50.976839003Z" level=info msg="containerd successfully booted in 0.042772s"
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:51 multinode-025000 dockerd[647]: time="2024-06-12T22:01:51.958896661Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.175284022Z" level=info msg="Loading containers: start."
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.600253538Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.679773678Z" level=info msg="Loading containers: done."
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.711890198Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.712661408Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:47.303260   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774658419Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 dockerd[647]: time="2024-06-12T22:01:52.774960723Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:01:52 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.292813222Z" level=info msg="Processing signal 'terminated'"
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 systemd[1]: Stopping Docker Application Container Engine...
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.294859626Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0612 15:03:47.303793   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295213927Z" level=info msg="Daemon shutdown complete"
	I0612 15:03:47.303949   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295258527Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0612 15:03:47.303974   13752 command_runner.go:130] > Jun 12 22:02:17 multinode-025000 dockerd[647]: time="2024-06-12T22:02:17.295281927Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0612 15:03:47.303974   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: docker.service: Deactivated successfully.
	I0612 15:03:47.303974   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Stopped Docker Application Container Engine.
	I0612 15:03:47.304018   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 systemd[1]: Starting Docker Application Container Engine...
	I0612 15:03:47.304018   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.376333019Z" level=info msg="Starting up"
	I0612 15:03:47.304047   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.377520222Z" level=info msg="containerd not running, starting managed containerd"
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:18.378639425Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1050
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.412854304Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437361860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437471260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437558660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437600861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437638361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437674061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.437957561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438006462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438028962Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438041362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438072362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.438209862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441166869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441307169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441467569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441599370Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441629870Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441648170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.441660470Z" level=info msg="metadata content store policy set" policy=shared
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442075271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442166571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0612 15:03:47.304105   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442187871Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442201971Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442217371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442266071Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442474372Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442551072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442567272Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442579372Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0612 15:03:47.304630   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442592672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304843   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442605072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304843   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442627672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304843   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442645772Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304910   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442660172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304935   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442671872Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.304964   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442683572Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.305003   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442694372Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0612 15:03:47.305084   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442714572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305084   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442727972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305113   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442739972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305113   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442754772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305113   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442766572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305174   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442778073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305220   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442788873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305220   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442800473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305220   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442812673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442826373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442837973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442849073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442860373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442875173Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442974073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.442994973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443006773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443066573Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443088973Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443100473Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443113173Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443144073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443156573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443166273Z" level=info msg="NRI interface is disabled by configuration."
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443418874Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443494174Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0612 15:03:47.305292   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443534574Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:18 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:18.443571274Z" level=info msg="containerd successfully booted in 0.033238s"
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.419757425Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.449018892Z" level=info msg="Loading containers: start."
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.739331061Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0612 15:03:47.305822   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.815989438Z" level=info msg="Loading containers: done."
	I0612 15:03:47.305947   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842536299Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0612 15:03:47.305947   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.842674899Z" level=info msg="Daemon has completed initialization"
	I0612 15:03:47.305947   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885012997Z" level=info msg="API listen on /var/run/docker.sock"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 dockerd[1044]: time="2024-06-12T22:02:19.885608398Z" level=info msg="API listen on [::]:2376"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:19 multinode-025000 systemd[1]: Started Docker Application Container Engine.
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start docker client with request timeout 0s"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Loaded network plugin cni"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:20Z" level=info msg="Start cri-dockerd grpc backend"
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:20 multinode-025000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-vgcxw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d\""
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-45qqd_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27\""
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449365529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449468129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449499429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.449616229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464315863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464397563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464444563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.464765264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.578440826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.581064832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306026   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582145135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.582532135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617373216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617486816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617504016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:26.617593816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306585   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da184577f0371664d0a472b38bbfcfd866178308bf69eaabdaefb47d30a7057a/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.306743   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a228f6c30fdf44f53a40ac14a2a8b995155f743739957ac413c700924fc873ed/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.306743   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/20cbfb3fb853177b89366d165b6a1f67628b2c429266b77034ee6d1ca68b7bac/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.306743   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.306840   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094370315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306840   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094456516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306898   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094499716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306925   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.094865116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306925   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.162934973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.306925   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163009674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.306993   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163029074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306993   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.163177074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.306993   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.167659984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307080   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170028290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307108   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.170289390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307145   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.171053192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307145   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233482736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307145   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.233861237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307215   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234167138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307238   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:27.234578639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307267   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0612 15:03:47.307302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.197280978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307302   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198158780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307369   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.198341381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307393   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213822116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307421   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.213977717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307455   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214060117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307455   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.214298317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307455   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234135963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307455   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234182263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307607   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234192563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307692   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.234264863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564394224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564548725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.564602325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.565056126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630517377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630663477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.630850678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.635052387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:02:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.972834166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.973545267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974028469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 dockerd[1050]: time="2024-06-12T22:02:32.974235669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1044]: time="2024-06-12T22:03:03.121297409Z" level=info msg="ignoring event" container=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.122616734Z" level=info msg="shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:47.307746   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123474651Z" level=warning msg="cleaning up after shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	I0612 15:03:47.308291   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123682355Z" level=info msg="cleaning up dead shim" namespace=moby
	I0612 15:03:47.308291   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819634342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308291   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819751243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308402   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819788644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308402   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.820654753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308402   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004015440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308402   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004176540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308527   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004193540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308566   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.005298945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308602   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006561551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308602   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006633551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308643   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006681251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006796752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/986567ef57643aec05ae5353795c364b380cb0f13c2ba98b1c4e04897e7b2e46/resolv.conf as [nameserver 172.23.192.1]"
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2434f89aefe0079002e81e136580c67ef1dca28bfa3b4c1e950241aea9663d4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542434894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542705495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542742195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.543238997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606926167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606994167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607017268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.308698   13752 command_runner.go:130] > Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607410069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0612 15:03:47.328874   13752 logs.go:123] Gathering logs for kubelet ...
	I0612 15:03:47.328874   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:21 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.063456    1381 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064093    1381 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: I0612 22:02:22.064387    1381 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1381]: E0612 22:02:22.065868    1381 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789327    1437 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.789465    1437 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: I0612 22:02:22.790480    1437 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:47.366324   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 kubelet[1437]: E0612 22:02:22.790564    1437 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0612 15:03:47.366852   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0612 15:03:47.366852   13752 command_runner.go:130] > Jun 12 22:02:22 multinode-025000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0612 15:03:47.366938   13752 command_runner.go:130] > Jun 12 22:02:23 multinode-025000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366938   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414046    1517 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414147    1517 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.414632    1517 server.go:927] "Client rotation is on, will bootstrap in background"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.416608    1517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.437750    1517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458497    1517 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.458849    1517 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460038    1517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.460095    1517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-025000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464057    1517 topology_manager.go:138] "Creating topology manager with none policy"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464080    1517 container_manager_linux.go:301] "Creating device plugin manager"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.464924    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466519    1517 kubelet.go:400] "Attempting to sync node with API server"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466546    1517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.466613    1517 kubelet.go:312] "Adding apiserver pod source"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.467352    1517 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.471384    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.471502    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.471869    1517 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.1.4" apiVersion="v1"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.477415    1517 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.478424    1517 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.480523    1517 server.go:1264] "Started kubelet"
	I0612 15:03:47.366992   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.481568    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.367517   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.481666    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.367592   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.481865    1517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0612 15:03:47.367592   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.482789    1517 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0612 15:03:47.367592   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.485497    1517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0612 15:03:47.367728   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.490040    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:47.367763   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.493219    1517 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0612 15:03:47.367763   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.495119    1517 server.go:455] "Adding debug handlers to kubelet server"
	I0612 15:03:47.367800   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.496095    1517 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0612 15:03:47.367836   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.498560    1517 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0612 15:03:47.367888   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501388    1517 factory.go:221] Registration of the systemd container factory successfully
	I0612 15:03:47.367933   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501556    1517 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0612 15:03:47.367933   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.501657    1517 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0612 15:03:47.368013   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.510641    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368061   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.510706    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.521028    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="200ms"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.554579    1517 reconciler.go:26] "Reconciler: start to sync state"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.594809    1517 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595077    1517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.595178    1517 state_mem.go:36] "Initialized new in-memory state store"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598081    1517 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598418    1517 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.598595    1517 policy_none.go:49] "None policy: Start"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.600760    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.602144    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610755    1517 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610783    1517 state_mem.go:35] "Initializing new in-memory state store"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.610843    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.611758    1517 state_mem.go:75] "Updated machine memory state"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.613995    1517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.614216    1517 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615027    1517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615636    1517 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.615685    1517 kubelet.go:2337] "Starting kubelet main sync loop"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.615730    1517 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.616221    1517 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: W0612 22:02:25.632621    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.632711    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.634150    1517 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-025000\" not found"
	I0612 15:03:47.368088   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.644874    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:47.368611   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:47.368669   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:47.368669   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:47.368755   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.717070    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d6071cd4356268889f798790dc93ce06" podNamespace="kube-system" podName="kube-apiserver-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.719714    1517 topology_manager.go:215] "Topology Admit Handler" podUID="88de11d8b1aaec126153d44e87c4b5dd" podNamespace="kube-system" podName="kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.720740    1517 topology_manager.go:215] "Topology Admit Handler" podUID="de62e7fd7d0feea82620e745032c1a67" podNamespace="kube-system" podName="kube-scheduler-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.722295    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="400ms"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.724629    1517 topology_manager.go:215] "Topology Admit Handler" podUID="7b6b5637642f3d915c0db1461c7074e6" podNamespace="kube-system" podName="etcd-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725657    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad98f611536b15941d0f49c694b6b6c39318bca8a66620735a88a81a12d3610"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725708    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb4351fab502e49592d49234119b810b53c5916eaf732d4ba148b3ad1eed4e6a"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725720    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9e051df48486e732da2c72bf2d0e3ec93cf8774632ecedd8825e656ba04a93"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725728    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2784305b1d5e9a088f0b73ff004b2d9eca305d397de3d7b9912638323d7c66b2"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.725737    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40443305b24f54fea9235d98bfb16f2d550b8914bfa46c0592b5c24be1ad5569"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.736677    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9933fdc9ca72b65b57e5b4b996215763431b87f18af45fdc8195252497e1d9a"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.760928    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894c58e9fe752e78b8e86cbbaabc1b6cc78ebcce37e4fc0bf1d838420f80a94d"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.777475    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a9b747663ca262bb35bb462ba83da0c104aee08928bd92a44297ee225d4c27"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.794474    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f2d5f19e95ea2d1cfe140159a55c94f5d809c3b67661196b1e285ac389537f"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.803790    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: E0612 22:02:25.804820    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885533    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-ca-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885705    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-ca-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885746    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-k8s-certs\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885768    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-k8s-certs\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885803    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-kubeconfig\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885844    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885869    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de62e7fd7d0feea82620e745032c1a67-kubeconfig\") pod \"kube-scheduler-multinode-025000\" (UID: \"de62e7fd7d0feea82620e745032c1a67\") " pod="kube-system/kube-scheduler-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885941    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-certs\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885970    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7b6b5637642f3d915c0db1461c7074e6-etcd-data\") pod \"etcd-multinode-025000\" (UID: \"7b6b5637642f3d915c0db1461c7074e6\") " pod="kube-system/etcd-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.885997    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6071cd4356268889f798790dc93ce06-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-025000\" (UID: \"d6071cd4356268889f798790dc93ce06\") " pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:25 multinode-025000 kubelet[1517]: I0612 22:02:25.886023    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88de11d8b1aaec126153d44e87c4b5dd-flexvolume-dir\") pod \"kube-controller-manager-multinode-025000\" (UID: \"88de11d8b1aaec126153d44e87c4b5dd\") " pod="kube-system/kube-controller-manager-multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.124157    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="800ms"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.206204    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.207259    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.576346    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.576490    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-025000&limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.832319    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.832430    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.847085    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.847226    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: W0612 22:02:26.894179    1517 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.894251    1517 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.23.200.184:8443: connect: connection refused
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: I0612 22:02:26.910045    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76517193a960ab9d78db3449c72d4b8285bbf321f947b06f8088487d36423fd7"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.925848    1517 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-025000?timeout=10s\": dial tcp 172.23.200.184:8443: connect: connection refused" interval="1.6s"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:26 multinode-025000 kubelet[1517]: E0612 22:02:26.967442    1517 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.23.200.184:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-025000.17d860d995e00c7b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-025000,UID:multinode-025000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-025000,},FirstTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,LastTimestamp:2024-06-12 22:02:25.480502395 +0000 UTC m=+0.149388345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-025000,}"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: I0612 22:02:27.008640    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:27 multinode-025000 kubelet[1517]: E0612 22:02:27.009541    1517 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.23.200.184:8443: connect: connection refused" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:28 multinode-025000 kubelet[1517]: I0612 22:02:28.611782    1517 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.067503    1517 kubelet_node_status.go:112] "Node was previously registered" node="multinode-025000"
	I0612 15:03:47.368819   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.069193    1517 kubelet_node_status.go:76] "Successfully registered node" node="multinode-025000"
	I0612 15:03:47.370206   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.078543    1517 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.083746    1517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.087512    1517 setters.go:580] "Node became not ready" node="multinode-025000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-06-12T22:02:31Z","lastTransitionTime":"2024-06-12T22:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.485482    1517 apiserver.go:52] "Watching apiserver"
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.491838    1517 topology_manager.go:215] "Topology Admit Handler" podUID="1f004a05-3f5f-444b-9ac0-88f0e23da904" podNamespace="kube-system" podName="kindnet-bqlg8"
	I0612 15:03:47.370235   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.492246    1517 topology_manager.go:215] "Topology Admit Handler" podUID="10b24fa7-8eea-4fbb-ab18-404e853aa7ab" podNamespace="kube-system" podName="kube-proxy-47lr8"
	I0612 15:03:47.370401   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.493249    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-025000" podUID="6b429685-b322-4b00-83fc-743786ff40e1"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494355    1517 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-025000" podUID="630bafc4-4576-4974-b638-7ab52dcfec18"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494642    1517 topology_manager.go:215] "Topology Admit Handler" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgcxw"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494763    1517 topology_manager.go:215] "Topology Admit Handler" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4" podNamespace="kube-system" podName="storage-provisioner"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.494876    1517 topology_manager.go:215] "Topology Admit Handler" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4" podNamespace="default" podName="busybox-fc5497c4f-45qqd"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495127    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.495306    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.370463   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.499353    1517 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.541672    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-025000"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.557538    1517 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-025000"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593012    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-cni-cfg\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593075    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-lib-modules\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593188    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-lib-modules\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593684    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d20f7489-1aa1-44b8-9221-4d1849884be4-tmp\") pod \"storage-provisioner\" (UID: \"d20f7489-1aa1-44b8-9221-4d1849884be4\") " pod="kube-system/storage-provisioner"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593711    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f004a05-3f5f-444b-9ac0-88f0e23da904-xtables-lock\") pod \"kindnet-bqlg8\" (UID: \"1f004a05-3f5f-444b-9ac0-88f0e23da904\") " pod="kube-system/kindnet-bqlg8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.593752    1517 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b24fa7-8eea-4fbb-ab18-404e853aa7ab-xtables-lock\") pod \"kube-proxy-47lr8\" (UID: \"10b24fa7-8eea-4fbb-ab18-404e853aa7ab\") " pod="kube-system/kube-proxy-47lr8"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594460    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.594613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.094549489 +0000 UTC m=+6.763435539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.622682    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dcbc8e258f964f689941b6844769d9" path="/var/lib/kubelet/pods/04dcbc8e258f964f689941b6844769d9/volumes"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.623801    1517 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610414aa8160848c0b6b79ea0a700b83" path="/var/lib/kubelet/pods/610414aa8160848c0b6b79ea0a700b83/volumes"
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.626972    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.371218   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627014    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.371748   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: E0612 22:02:31.627132    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:32.127114564 +0000 UTC m=+6.796000614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.371748   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.673848    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-025000" podStartSLOduration=0.673800971 podStartE2EDuration="673.800971ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.632162175 +0000 UTC m=+6.301048225" watchObservedRunningTime="2024-06-12 22:02:31.673800971 +0000 UTC m=+6.342686921"
	I0612 15:03:47.371908   13752 command_runner.go:130] > Jun 12 22:02:31 multinode-025000 kubelet[1517]: I0612 22:02:31.674234    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-025000" podStartSLOduration=0.674226172 podStartE2EDuration="674.226172ms" podCreationTimestamp="2024-06-12 22:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-12 22:02:31.67337587 +0000 UTC m=+6.342261920" watchObservedRunningTime="2024-06-12 22:02:31.674226172 +0000 UTC m=+6.343112222"
	I0612 15:03:47.371924   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099190    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.372002   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.099284    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.099266752 +0000 UTC m=+7.768152702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.372002   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199774    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372104   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199808    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: E0612 22:02:32.199864    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:33.199845384 +0000 UTC m=+7.868731334 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.394461    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5287b61207e62a3ec16408b08af503462a8bed945d441422fd0b733e752d6217"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.774495    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20975d81b350d77bb2d9d69d861d19ddbcbab33211643f61e2aaa0d6dc46a9d"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:32 multinode-025000 kubelet[1517]: I0612 22:02:32.791274    1517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435c56b0fbbbb46e4b392ac6467c2054ce16271a6b3dad2d53f747f839b4b3cd"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106313    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.106394    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.106375874 +0000 UTC m=+9.775261924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208318    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208375    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.208431    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:35.208413609 +0000 UTC m=+9.877299559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.617822    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:33 multinode-025000 kubelet[1517]: E0612 22:02:33.618103    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.125562    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.126376    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.12633293 +0000 UTC m=+13.795218980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226548    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372131   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226607    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372654   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.226693    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:39.226674161 +0000 UTC m=+13.895560111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372654   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.616712    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617047    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:35 multinode-025000 kubelet[1517]: E0612 22:02:35.617270    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618147    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:37 multinode-025000 kubelet[1517]: E0612 22:02:37.618607    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164650    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.164956    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.164935524 +0000 UTC m=+21.833821574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.265764    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266004    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.266086    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:02:47.266062158 +0000 UTC m=+21.934948208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.616548    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:39 multinode-025000 kubelet[1517]: E0612 22:02:39.617577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:40 multinode-025000 kubelet[1517]: E0612 22:02:40.619032    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617010    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.372737   13752 command_runner.go:130] > Jun 12 22:02:41 multinode-025000 kubelet[1517]: E0612 22:02:41.617816    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617105    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:43 multinode-025000 kubelet[1517]: E0612 22:02:43.617755    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.617112    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.618034    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373271   13752 command_runner.go:130] > Jun 12 22:02:45 multinode-025000 kubelet[1517]: E0612 22:02:45.621402    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.373471   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234271    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.373471   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.234420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.234402815 +0000 UTC m=+37.903288765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.373558   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335532    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.373558   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335632    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.373634   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.335696    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:03.33568009 +0000 UTC m=+38.004566140 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.373700   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617048    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373700   13752 command_runner.go:130] > Jun 12 22:02:47 multinode-025000 kubelet[1517]: E0612 22:02:47.617530    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373784   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617040    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373862   13752 command_runner.go:130] > Jun 12 22:02:49 multinode-025000 kubelet[1517]: E0612 22:02:49.617673    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373887   13752 command_runner.go:130] > Jun 12 22:02:50 multinode-025000 kubelet[1517]: E0612 22:02:50.623368    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.616848    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:51 multinode-025000 kubelet[1517]: E0612 22:02:51.617656    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617130    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:53 multinode-025000 kubelet[1517]: E0612 22:02:53.617679    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617082    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.617595    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:55 multinode-025000 kubelet[1517]: E0612 22:02:55.624795    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.617430    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:57 multinode-025000 kubelet[1517]: E0612 22:02:57.618180    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.616577    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:02:59 multinode-025000 kubelet[1517]: E0612 22:02:59.617339    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:00 multinode-025000 kubelet[1517]: E0612 22:03:00.626741    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617176    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:01 multinode-025000 kubelet[1517]: E0612 22:03:01.617573    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236005    1517 scope.go:117] "RemoveContainer" containerID="61910369e0d4ba1a5246a686e904c168fc7467d239e475004146ddf2835e8e78"
	I0612 15:03:47.373994   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: I0612 22:03:03.236962    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:47.374516   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.239739    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d20f7489-1aa1-44b8-9221-4d1849884be4)\"" pod="kube-system/storage-provisioner" podUID="d20f7489-1aa1-44b8-9221-4d1849884be4"
	I0612 15:03:47.374682   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284341    1517 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.284420    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume podName:c5bd143a-d39e-46af-9308-0a97bb45729c nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.284401461 +0000 UTC m=+69.953287411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5bd143a-d39e-46af-9308-0a97bb45729c-config-volume") pod "coredns-7db6d8ff4d-vgcxw" (UID: "c5bd143a-d39e-46af-9308-0a97bb45729c") : object "kube-system"/"coredns" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385432    1517 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385531    1517 projected.go:200] Error preparing data for projected volume kube-api-access-2w7zn for pod default/busybox-fc5497c4f-45qqd: object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.385613    1517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn podName:8736e2b2-a744-4092-ac73-c59700fda8a4 nodeName:}" failed. No retries permitted until 2024-06-12 22:03:35.385594617 +0000 UTC m=+70.054480667 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7zn" (UniqueName: "kubernetes.io/projected/8736e2b2-a744-4092-ac73-c59700fda8a4-kube-api-access-2w7zn") pod "busybox-fc5497c4f-45qqd" (UID: "8736e2b2-a744-4092-ac73-c59700fda8a4") : object "default"/"kube-root-ca.crt" not registered
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.616668    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:03 multinode-025000 kubelet[1517]: E0612 22:03:03.617100    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617214    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.617674    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:05 multinode-025000 kubelet[1517]: E0612 22:03:05.628542    1517 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.616455    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:07 multinode-025000 kubelet[1517]: E0612 22:03:07.617581    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617093    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617405    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:13 multinode-025000 kubelet[1517]: I0612 22:03:13.617647    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.637114    1517 scope.go:117] "RemoveContainer" containerID="0749f44d03561395230c8a60a41853a49502741bf3bcd45acc924d346061f5b0"
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: E0612 22:03:25.663119    1517 iptables.go:577] "Could not set up iptables canary" err=<
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0612 15:03:47.374717   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0612 15:03:47.375237   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0612 15:03:47.375237   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0612 15:03:47.375237   13752 command_runner.go:130] > Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.699754    1517 scope.go:117] "RemoveContainer" containerID="2455f315465b9508a3fe1025d7150342eedb3cb09eb5f8fd9b2cbbffe1306db0"
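
Note on the retry cadence in the kubelet excerpt above: the nestedpendingoperations failures back off geometrically, with durationBeforeRetry growing 1s -> 2s -> 4s -> 8s -> 16s -> 32s across the excerpt (hence the monotonic offsets climbing from m=+7.77 to m=+70.05) while the kubelet waits for the "coredns" and "kube-root-ca.crt" objects to be registered. A minimal sketch of that doubling schedule follows; all names are illustrative and this is not the kubelet's actual implementation.

// backoff_sketch.go - reproduces the retry delays seen in the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 1 * time.Second        // first "durationBeforeRetry 1s"
	elapsed := 0 * time.Second      // rough analogue of the m=+N offsets
	for attempt := 1; attempt <= 6; attempt++ {
		elapsed += delay
		fmt.Printf("attempt %d: retry in %v (cumulative %v)\n", attempt, delay, elapsed)
		delay *= 2 // 1s -> 2s -> 4s -> 8s -> 16s -> 32s, matching the excerpt
	}
}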
	I0612 15:03:47.408558   13752 logs.go:123] Gathering logs for dmesg ...
	I0612 15:03:47.408558   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0612 15:03:47.438236   13752 command_runner.go:130] > [Jun12 22:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.131000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.025099] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.064850] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.023448] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0612 15:03:47.438236   13752 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +5.508165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +1.342262] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +1.269809] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +7.259362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0612 15:03:47.438236   13752 command_runner.go:130] > [Jun12 22:01] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.155290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [Jun12 22:02] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.095843] kauditd_printk_skb: 73 callbacks suppressed
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.507476] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.171390] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.210222] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +2.904531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.189304] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.162041] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.261611] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.815328] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +0.096217] kauditd_printk_skb: 205 callbacks suppressed
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +3.646175] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +1.441935] kauditd_printk_skb: 54 callbacks suppressed
	I0612 15:03:47.438236   13752 command_runner.go:130] > [  +5.624550] kauditd_printk_skb: 20 callbacks suppressed
	I0612 15:03:47.439238   13752 command_runner.go:130] > [  +3.644538] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	I0612 15:03:47.439238   13752 command_runner.go:130] > [  +8.250122] kauditd_printk_skb: 70 callbacks suppressed
	I0612 15:03:47.441204   13752 logs.go:123] Gathering logs for etcd [6b61f5f6483d] ...
	I0612 15:03:47.441204   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61f5f6483d"
	I0612 15:03:47.466449   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.594582Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:47.469533   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.595941Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.23.200.184:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.23.200.184:2380","--initial-cluster=multinode-025000=https://172.23.200.184:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.23.200.184:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.23.200.184:2380","--name=multinode-025000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0612 15:03:47.469582   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596165Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0612 15:03:47.469637   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.596271Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0612 15:03:47.469684   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596356Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.23.200.184:2380"]}
	I0612 15:03:47.469765   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.596492Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:47.469799   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.611167Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"]}
	I0612 15:03:47.469851   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.613093Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-025000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0612 15:03:47.469851   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.643295Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"27.151363ms"}
	I0612 15:03:47.469986   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.674268Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0612 15:03:47.470033   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702241Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","commit-index":2039}
	I0612 15:03:47.470033   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=()"}
	I0612 15:03:47.470084   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.702585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became follower at term 2"}
	I0612 15:03:47.470084   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.70261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b93ef5bd064a9684 [peers: [], term: 2, commit: 2039, applied: 0, lastindex: 2039, lastterm: 2]"}
	I0612 15:03:47.470147   13752 command_runner.go:130] ! {"level":"warn","ts":"2024-06-12T22:02:27.719372Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0612 15:03:47.470203   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.724082Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1403}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.735755Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1769}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.743333Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.753311Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b93ef5bd064a9684","timeout":"7s"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755587Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b93ef5bd064a9684"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.755671Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b93ef5bd064a9684","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758078Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.758939Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0612 15:03:47.470233   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0612 15:03:47.470760   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=(13348376537775904388)"}
	I0612 15:03:47.470760   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.759589Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","added-peer-id":"b93ef5bd064a9684","added-peer-peer-urls":["https://172.23.198.154:2380"]}
	I0612 15:03:47.470760   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.760197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","cluster-version":"3.5"}
	I0612 15:03:47.470858   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.761198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0612 15:03:47.470858   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.764395Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0612 15:03:47.470996   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.765492Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b93ef5bd064a9684","initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0612 15:03:47.471051   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766195Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0612 15:03:47.471098   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.766744Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.200.184:2380"}
	I0612 15:03:47.471098   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:27.767384Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.200.184:2380"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 is starting a new election at term 2"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.50332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became pre-candidate at term 2"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgPreVoteResp from b93ef5bd064a9684 at term 2"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became candidate at term 3"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgVoteResp from b93ef5bd064a9684 at term 3"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became leader at term 3"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.503481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b93ef5bd064a9684 elected leader b93ef5bd064a9684 at term 3"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.511069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b93ef5bd064a9684","local-member-attributes":"{Name:multinode-025000 ClientURLs:[https://172.23.200.184:2379]}","request-path":"/0/members/b93ef5bd064a9684/attributes","cluster-id":"a7fa2563dcb4b7b8","publish-timeout":"7s"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.512996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.513243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.514729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0612 15:03:47.471124   13752 command_runner.go:130] ! {"level":"info","ts":"2024-06-12T22:02:29.515422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.200.184:2379"}
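
The etcd excerpt above shows a clean single-member raft restart: the member comes back as a follower at term 2, pre-votes, grants itself the vote, and becomes leader at term 3 before serving client traffic on 2379. A hedged sketch of how one could confirm the member is serving, assuming go.etcd.io/etcd/client/v3 is available and reusing the cert paths from the flags logged above (the server runs with --client-cert-auth=true, so a client must present certs); this is not part of the test suite.

// member_check.go - lists the cluster members to verify the endpoint serves.
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/client/pkg/v3/transport"
	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Cert paths taken from the server flags above; assumed readable here.
	tlsInfo := transport.TLSInfo{
		CertFile:      "/var/lib/minikube/certs/etcd/server.crt",
		KeyFile:       "/var/lib/minikube/certs/etcd/server.key",
		TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
	}
	tlsConfig, err := tlsInfo.ClientConfig()
	if err != nil {
		panic(err)
	}
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://172.23.200.184:2379"},
		DialTimeout: 5 * time.Second,
		TLS:         tlsConfig,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Once the election above completes, MemberList should return the
	// single member ("multinode-025000") and its client URLs.
	resp, err := cli.MemberList(ctx)
	if err != nil {
		panic(err)
	}
	for _, m := range resp.Members {
		fmt.Println(m.Name, m.ClientURLs)
	}
}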
	I0612 15:03:47.477975   13752 logs.go:123] Gathering logs for kube-scheduler [755750ecd1e3] ...
	I0612 15:03:47.477975   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 755750ecd1e3"
	I0612 15:03:47.502405   13752 command_runner.go:130] ! I0612 22:02:28.771072       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:47.504030   13752 command_runner.go:130] ! W0612 22:02:31.003959       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:47.504089   13752 command_runner.go:130] ! W0612 22:02:31.004072       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.504157   13752 command_runner.go:130] ! W0612 22:02:31.004087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:47.504157   13752 command_runner.go:130] ! W0612 22:02:31.004098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.034273       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.034440       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.039288       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.039331       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.039699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.040018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.504196   13752 command_runner.go:130] ! I0612 22:02:31.139849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
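
The requestheader_controller warning in the scheduler excerpt above names its own remediation ("Usually fixed by 'kubectl create rolebinding ...'"). A hedged client-go sketch of that rolebinding follows; the binding name is an illustrative assumption, and the subject is the "system:kube-scheduler" user from the forbidden-configmap errors rather than the service account in the log's template.

// create_rolebinding.go - grants read access to extension-apiserver-authentication.
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rb := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "scheduler-authentication-reader", // illustrative name
			Namespace: "kube-system",
		},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "Role",
			Name:     "extension-apiserver-authentication-reader",
		},
		Subjects: []rbacv1.Subject{{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "User",
			Name:     "system:kube-scheduler",
		}},
	}
	if _, err := cs.RbacV1().RoleBindings("kube-system").Create(
		context.Background(), rb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}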
	I0612 15:03:47.506197   13752 logs.go:123] Gathering logs for kube-scheduler [6b021c195669] ...
	I0612 15:03:47.506197   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b021c195669"
	I0612 15:03:47.533859   13752 command_runner.go:130] ! I0612 21:39:26.474423       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:47.533859   13752 command_runner.go:130] ! W0612 21:39:28.263287       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0612 15:03:47.537866   13752 command_runner.go:130] ! W0612 21:39:28.263543       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.537866   13752 command_runner.go:130] ! W0612 21:39:28.263706       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0612 15:03:47.537942   13752 command_runner.go:130] ! W0612 21:39:28.263849       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 15:03:47.537974   13752 command_runner.go:130] ! I0612 21:39:28.303051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 15:03:47.537974   13752 command_runner.go:130] ! I0612 21:39:28.305840       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.538040   13752 command_runner.go:130] ! I0612 21:39:28.310682       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 15:03:47.538071   13752 command_runner.go:130] ! I0612 21:39:28.312812       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:47.538071   13752 command_runner.go:130] ! I0612 21:39:28.313421       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 15:03:47.538071   13752 command_runner.go:130] ! I0612 21:39:28.313594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.538907   13752 command_runner.go:130] ! W0612 21:39:28.336905       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.539034   13752 command_runner.go:130] ! E0612 21:39:28.337826       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.539034   13752 command_runner.go:130] ! W0612 21:39:28.338227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:47.539034   13752 command_runner.go:130] ! E0612 21:39:28.338391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:47.539112   13752 command_runner.go:130] ! W0612 21:39:28.338652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:47.539112   13752 command_runner.go:130] ! E0612 21:39:28.338896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:47.539112   13752 command_runner.go:130] ! W0612 21:39:28.339195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:47.539195   13752 command_runner.go:130] ! E0612 21:39:28.339406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:47.539220   13752 command_runner.go:130] ! W0612 21:39:28.339694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:47.539267   13752 command_runner.go:130] ! E0612 21:39:28.339892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:47.539267   13752 command_runner.go:130] ! W0612 21:39:28.340188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:47.539348   13752 command_runner.go:130] ! E0612 21:39:28.340362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:47.539376   13752 command_runner.go:130] ! W0612 21:39:28.340697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539376   13752 command_runner.go:130] ! E0612 21:39:28.341129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539376   13752 command_runner.go:130] ! W0612 21:39:28.341447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539495   13752 command_runner.go:130] ! E0612 21:39:28.341664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539495   13752 command_runner.go:130] ! W0612 21:39:28.341989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:47.539495   13752 command_runner.go:130] ! E0612 21:39:28.342229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:47.539574   13752 command_runner.go:130] ! W0612 21:39:28.342540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539574   13752 command_runner.go:130] ! E0612 21:39:28.344839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539668   13752 command_runner.go:130] ! W0612 21:39:28.345316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:47.539741   13752 command_runner.go:130] ! E0612 21:39:28.347872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:47.539741   13752 command_runner.go:130] ! W0612 21:39:28.345596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539859   13752 command_runner.go:130] ! W0612 21:39:28.345651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:47.539859   13752 command_runner.go:130] ! W0612 21:39:28.345691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:47.539859   13752 command_runner.go:130] ! W0612 21:39:28.345823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:47.539966   13752 command_runner.go:130] ! E0612 21:39:28.348490       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.539966   13752 command_runner.go:130] ! E0612 21:39:28.348742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:47.540065   13752 command_runner.go:130] ! E0612 21:39:28.349066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:47.540065   13752 command_runner.go:130] ! E0612 21:39:28.349147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:47.540135   13752 command_runner.go:130] ! W0612 21:39:29.192073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:47.540135   13752 command_runner.go:130] ! E0612 21:39:29.192126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0612 15:03:47.540135   13752 command_runner.go:130] ! W0612 21:39:29.249000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540217   13752 command_runner.go:130] ! E0612 21:39:29.249248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540279   13752 command_runner.go:130] ! W0612 21:39:29.268880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:47.540279   13752 command_runner.go:130] ! E0612 21:39:29.268972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0612 15:03:47.540279   13752 command_runner.go:130] ! W0612 21:39:29.271696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540279   13752 command_runner.go:130] ! E0612 21:39:29.271839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540470   13752 command_runner.go:130] ! W0612 21:39:29.275489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.296739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.297145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.433593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.433887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.471880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.471994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.482669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.483008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.569402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.571433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! W0612 21:39:29.677906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:47.540610   13752 command_runner.go:130] ! E0612 21:39:29.677950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0612 15:03:47.541146   13752 command_runner.go:130] ! W0612 21:39:29.687951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:47.541146   13752 command_runner.go:130] ! E0612 21:39:29.688054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0612 15:03:47.541146   13752 command_runner.go:130] ! W0612 21:39:29.780288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:47.541146   13752 command_runner.go:130] ! E0612 21:39:29.780411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0612 15:03:47.541377   13752 command_runner.go:130] ! W0612 21:39:29.832564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:47.541377   13752 command_runner.go:130] ! E0612 21:39:29.832892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0612 15:03:47.541377   13752 command_runner.go:130] ! W0612 21:39:29.889591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.541453   13752 command_runner.go:130] ! E0612 21:39:29.889868       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 15:03:47.541485   13752 command_runner.go:130] ! I0612 21:39:32.513980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 15:03:47.541522   13752 command_runner.go:130] ! E0612 22:00:01.172050       1 run.go:74] "command failed" err="finished without leader elect"
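Everything the 21:39 scheduler logs between 28.33s and 29.89s is the same startup race: its informers start listing before kubeadm has finished wiring up RBAC for system:kube-scheduler, every list comes back forbidden, and the retries succeed once the bindings land (caches sync at 21:39:32). The closing `finished without leader elect` error appears to be the pre-restart instance exiting after losing leader election, consistent with the node restart visible in the container table below. The log itself names the fix for the configmap case (a rolebinding against extension-apiserver-authentication-reader). If forbidden errors like these ever persisted past startup, one way to replay them is `kubectl auth can-i` impersonating the scheduler. A minimal sketch, assuming kubectl is on PATH and pointed at the affected cluster; the resource list mirrors the forbidden messages above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Replays the scheduler's failing list calls as authorization checks.
	func main() {
		checks := []struct{ resource, namespace string }{
			{"pods", ""},
			{"nodes", ""},
			{"persistentvolumes", ""},
			{"configmaps", "kube-system"},
		}
		for _, c := range checks {
			args := []string{"auth", "can-i", "list", c.resource,
				"--as", "system:kube-scheduler"}
			if c.namespace != "" {
				args = append(args, "-n", c.namespace)
			}
			// can-i exits non-zero for "no", so we report output, not err.
			out, _ := exec.Command("kubectl", args...).CombinedOutput()
			fmt.Printf("list %-18s -> %s", c.resource, out)
		}
	}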
	I0612 15:03:47.552398   13752 logs.go:123] Gathering logs for container status ...
	I0612 15:03:47.552398   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0612 15:03:47.624621   13752 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0612 15:03:47.624621   13752 command_runner.go:130] > f2a949d407287       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   2434f89aefe00       busybox-fc5497c4f-45qqd
	I0612 15:03:47.624621   13752 command_runner.go:130] > 26e5daf354e36       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   986567ef57643       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:47.624621   13752 command_runner.go:130] > 448e057077ddc       6e38f40d628db                                                                                         34 seconds ago       Running             storage-provisioner       2                   5287b61207e62       storage-provisioner
	I0612 15:03:47.624621   13752 command_runner.go:130] > cccfd1e9fef5e       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a20975d81b350       kindnet-bqlg8
	I0612 15:03:47.624621   13752 command_runner.go:130] > 3546a5c003210       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   5287b61207e62       storage-provisioner
	I0612 15:03:47.624621   13752 command_runner.go:130] > 227a905829b07       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   435c56b0fbbbb       kube-proxy-47lr8
	I0612 15:03:47.624621   13752 command_runner.go:130] > 6b61f5f6483d5       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   76517193a960a       etcd-multinode-025000
	I0612 15:03:47.624621   13752 command_runner.go:130] > bbe2d2e51b5f3       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            0                   20cbfb3fb8531       kube-apiserver-multinode-025000
	I0612 15:03:47.624621   13752 command_runner.go:130] > 7acc8ff0a9317       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   a228f6c30fdf4       kube-controller-manager-multinode-025000
	I0612 15:03:47.624621   13752 command_runner.go:130] > 755750ecd1e39       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   da184577f0371       kube-scheduler-multinode-025000
	I0612 15:03:47.624621   13752 command_runner.go:130] > bfc0382d49a48       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   84a9b747663ca       busybox-fc5497c4f-45qqd
	I0612 15:03:47.625152   13752 command_runner.go:130] > e83cf4eef49e4       cbb01a7bd410d                                                                                         23 minutes ago       Exited              coredns                   0                   894c58e9fe752       coredns-7db6d8ff4d-vgcxw
	I0612 15:03:47.625152   13752 command_runner.go:130] > 4d60d82f6bc5d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              23 minutes ago       Exited              kindnet-cni               0                   92f2d5f19e95e       kindnet-bqlg8
	I0612 15:03:47.625152   13752 command_runner.go:130] > c4842faba751e       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   fad98f611536b       kube-proxy-47lr8
	I0612 15:03:47.625152   13752 command_runner.go:130] > 6b021c195669e       a52dc94f0a912                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d9933fdc9ca72       kube-scheduler-multinode-025000
	I0612 15:03:47.625335   13752 command_runner.go:130] > 685d167da53c9       25a1387cdab82                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bb4351fab502e       kube-controller-manager-multinode-025000
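The table above comes from the `sudo crictl ps -a || sudo docker ps -a` fallback run two lines earlier, and it tells the restart story on its own: etcd and kube-apiserver are fresh (attempt 0, about a minute old), while the original scheduler, controller-manager, kube-proxy, kindnet and coredns containers all exited 20-24 minutes in and were replaced by new attempts. A minimal sketch of the same try-crictl-then-docker fallback in Go; this illustrates the pattern only, not minikube's implementation (which runs the command over SSH via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listContainers prefers crictl and falls back to docker,
	// mirroring the gathering command in the log above.
	func listContainers() (string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := listContainers()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
			return
		}
		fmt.Print(out)
	}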
	I0612 15:03:47.627616   13752 logs.go:123] Gathering logs for kube-proxy [227a905829b0] ...
	I0612 15:03:47.627648   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 227a905829b0"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.538961       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.585761       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.200.184"]
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.754056       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.754118       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.754141       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.765449       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.766192       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.766246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.769980       1 config.go:192] "Starting service config controller"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.770461       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.770493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.770630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.773852       1 config.go:319] "Starting node config controller"
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.773944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.870743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.870698       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:47.658208   13752 command_runner.go:130] ! I0612 22:02:33.882534       1 shared_informer.go:320] Caches are synced for node config
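The 22:02 kube-proxy comes up without incident: iptables mode, IPv4 single-stack, and all three config caches synced within roughly 100ms. Its `route_localnet=1` message refers to a real sysctl that makes NodePorts reachable on 127.0.0.1; if localhost NodePorts misbehaved after a restart like this, reading the value back on the node is the quickest check. A minimal sketch (it would have to run inside the VM, e.g. under `minikube ssh`):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// kube-proxy sets this so node-ports are reachable on loopback
		// (see the proxier.go message in the log above).
		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
	}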
	I0612 15:03:47.660659   13752 logs.go:123] Gathering logs for kube-proxy [c4842faba751] ...
	I0612 15:03:47.660659   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4842faba751"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.407607       1 server_linux.go:69] "Using iptables proxy"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.423801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.198.154"]
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.480061       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.480182       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.480205       1 server_linux.go:165] "Using iptables Proxier"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.484521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.485171       1 server.go:872] "Version info" version="v1.30.1"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.485535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.488126       1 config.go:192] "Starting service config controller"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.488162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.488188       1 config.go:101] "Starting endpoint slice config controller"
	I0612 15:03:47.678563   13752 command_runner.go:130] ! I0612 21:39:47.488197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 15:03:47.689867   13752 command_runner.go:130] ! I0612 21:39:47.488969       1 config.go:319] "Starting node config controller"
	I0612 15:03:47.690111   13752 command_runner.go:130] ! I0612 21:39:47.489001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 15:03:47.690111   13752 command_runner.go:130] ! I0612 21:39:47.588500       1 shared_informer.go:320] Caches are synced for service config
	I0612 15:03:47.690111   13752 command_runner.go:130] ! I0612 21:39:47.588641       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 15:03:47.690111   13752 command_runner.go:130] ! I0612 21:39:47.589226       1 shared_informer.go:320] Caches are synced for node config
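Comparing the two kube-proxy instances shows why the whole control plane had to be rebuilt: the current proxy retrieved node IP 172.23.200.184 while the pre-restart one had 172.23.198.154, so the Hyper-V VM evidently came back from the restart on a new address. A hypothetical helper for pulling those IPs out of a dump like this one; the `IPs=["..."]` format is copied from the server.go lines above:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		// Matches kube-proxy's startup line:
		//   "Successfully retrieved node IP(s)" IPs=["172.23.200.184"]
		re := regexp.MustCompile(`IPs=\["([0-9.]+)"\]`)
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
		for sc.Scan() {
			if m := re.FindStringSubmatch(sc.Text()); m != nil {
				fmt.Println("node IP:", m[1])
			}
		}
	}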
	I0612 15:03:47.691875   13752 logs.go:123] Gathering logs for kube-controller-manager [685d167da53c] ...
	I0612 15:03:47.691875   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 685d167da53c"
	I0612 15:03:47.717904   13752 command_runner.go:130] ! I0612 21:39:26.275086       1 serving.go:380] Generated self-signed cert in-memory
	I0612 15:03:47.717904   13752 command_runner.go:130] ! I0612 21:39:26.758419       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0612 15:03:47.717904   13752 command_runner.go:130] ! I0612 21:39:26.759036       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 15:03:47.718822   13752 command_runner.go:130] ! I0612 21:39:26.761311       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0612 15:03:47.718822   13752 command_runner.go:130] ! I0612 21:39:26.761663       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0612 15:03:47.718822   13752 command_runner.go:130] ! I0612 21:39:26.762454       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0612 15:03:47.718822   13752 command_runner.go:130] ! I0612 21:39:26.762652       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 15:03:47.718898   13752 command_runner.go:130] ! I0612 21:39:31.260969       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0612 15:03:47.718944   13752 command_runner.go:130] ! I0612 21:39:31.261096       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0612 15:03:47.718944   13752 command_runner.go:130] ! E0612 21:39:31.316508       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0612 15:03:47.718995   13752 command_runner.go:130] ! I0612 21:39:31.316587       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0612 15:03:47.719024   13752 command_runner.go:130] ! I0612 21:39:31.342032       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0612 15:03:47.719024   13752 command_runner.go:130] ! I0612 21:39:31.342287       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0612 15:03:47.719024   13752 command_runner.go:130] ! I0612 21:39:31.342304       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0612 15:03:47.719024   13752 command_runner.go:130] ! I0612 21:39:31.362243       1 shared_informer.go:320] Caches are synced for tokens
	I0612 15:03:47.719093   13752 command_runner.go:130] ! I0612 21:39:31.399024       1 controllermanager.go:761] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:47.719121   13752 command_runner.go:130] ! I0612 21:39:31.399081       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0612 15:03:47.719121   13752 command_runner.go:130] ! I0612 21:39:31.399264       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0612 15:03:47.719121   13752 command_runner.go:130] ! I0612 21:39:31.443376       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0612 15:03:47.719121   13752 command_runner.go:130] ! I0612 21:39:31.443603       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0612 15:03:47.719185   13752 command_runner.go:130] ! I0612 21:39:31.443617       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0612 15:03:47.719185   13752 command_runner.go:130] ! I0612 21:39:31.480477       1 controllermanager.go:761] "Started controller" controller="serviceaccount-controller"
	I0612 15:03:47.719185   13752 command_runner.go:130] ! I0612 21:39:31.480993       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0612 15:03:47.719185   13752 command_runner.go:130] ! I0612 21:39:31.481007       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0612 15:03:47.719244   13752 command_runner.go:130] ! I0612 21:39:31.523943       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0612 15:03:47.719273   13752 command_runner.go:130] ! I0612 21:39:31.524182       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.524535       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.524741       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.553194       1 controllermanager.go:761] "Started controller" controller="replicaset-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.554412       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.556852       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.560273       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.560448       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.561614       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.561933       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593438       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593459       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593488       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593534       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593588       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593650       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593684       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593701       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593721       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593739       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.593950       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.594051       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.594202       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.594262       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0612 15:03:47.719304   13752 command_runner.go:130] ! I0612 21:39:31.594286       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594306       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594500       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594602       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594857       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.594957       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.595276       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.595463       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0612 15:03:47.719838   13752 command_runner.go:130] ! I0612 21:39:31.605247       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0612 15:03:47.719977   13752 command_runner.go:130] ! I0612 21:39:31.605722       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0612 15:03:47.719977   13752 command_runner.go:130] ! I0612 21:39:31.607199       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0612 15:03:47.719977   13752 command_runner.go:130] ! I0612 21:39:31.668704       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.669329       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.669521       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.820968       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.821104       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0612 15:03:47.720054   13752 command_runner.go:130] ! I0612 21:39:31.821117       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0612 15:03:47.720128   13752 command_runner.go:130] ! I0612 21:39:31.973500       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0612 15:03:47.720128   13752 command_runner.go:130] ! I0612 21:39:31.973543       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0612 15:03:47.720128   13752 command_runner.go:130] ! I0612 21:39:31.975344       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0612 15:03:47.720202   13752 command_runner.go:130] ! I0612 21:39:31.975377       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0612 15:03:47.720227   13752 command_runner.go:130] ! I0612 21:39:32.163715       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0612 15:03:47.720227   13752 command_runner.go:130] ! I0612 21:39:32.163860       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0612 15:03:47.720291   13752 command_runner.go:130] ! I0612 21:39:32.320380       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0612 15:03:47.720314   13752 command_runner.go:130] ! I0612 21:39:32.320516       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0612 15:03:47.720314   13752 command_runner.go:130] ! I0612 21:39:32.320529       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0612 15:03:47.720314   13752 command_runner.go:130] ! I0612 21:39:32.468817       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0612 15:03:47.720382   13752 command_runner.go:130] ! I0612 21:39:32.468893       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0612 15:03:47.720382   13752 command_runner.go:130] ! I0612 21:39:32.636144       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0612 15:03:47.720382   13752 command_runner.go:130] ! I0612 21:39:32.636921       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0612 15:03:47.720382   13752 command_runner.go:130] ! I0612 21:39:32.637331       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0612 15:03:47.720457   13752 command_runner.go:130] ! I0612 21:39:32.775300       1 controllermanager.go:761] "Started controller" controller="ephemeral-volume-controller"
	I0612 15:03:47.720487   13752 command_runner.go:130] ! I0612 21:39:32.776007       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0612 15:03:47.720487   13752 command_runner.go:130] ! I0612 21:39:32.778803       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0612 15:03:47.720487   13752 command_runner.go:130] ! I0612 21:39:32.920254       1 controllermanager.go:761] "Started controller" controller="endpointslice-controller"
	I0612 15:03:47.720487   13752 command_runner.go:130] ! I0612 21:39:32.920359       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0612 15:03:47.720558   13752 command_runner.go:130] ! I0612 21:39:32.920902       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0612 15:03:47.720558   13752 command_runner.go:130] ! I0612 21:39:33.069533       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0612 15:03:47.720618   13752 command_runner.go:130] ! I0612 21:39:33.069689       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0612 15:03:47.720618   13752 command_runner.go:130] ! I0612 21:39:33.069704       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0612 15:03:47.720676   13752 command_runner.go:130] ! I0612 21:39:33.069713       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0612 15:03:47.720676   13752 command_runner.go:130] ! I0612 21:39:33.115693       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0612 15:03:47.720730   13752 command_runner.go:130] ! I0612 21:39:33.115796       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0612 15:03:47.720730   13752 command_runner.go:130] ! I0612 21:39:33.115809       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0612 15:03:47.720730   13752 command_runner.go:130] ! I0612 21:39:33.116021       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0612 15:03:47.720730   13752 command_runner.go:130] ! I0612 21:39:33.116257       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0612 15:03:47.720804   13752 command_runner.go:130] ! I0612 21:39:33.116416       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0612 15:03:47.720829   13752 command_runner.go:130] ! I0612 21:39:33.169481       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.169523       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.169561       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.170619       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.170693       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.170745       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.171426       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.171458       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.171479       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.172032       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.172160       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.172352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:33.172295       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.229790       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.230104       1 controllermanager.go:761] "Started controller" controller="node-ipam-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.230715       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.230868       1 shared_informer.go:313] Waiting for caches to sync for node
	I0612 15:03:47.720860   13752 command_runner.go:130] ! E0612 21:39:43.246433       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.246740       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.246878       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.247178       1 controllermanager.go:739] "Warning: skipping controller" controller="node-route-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.259694       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.260105       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.260326       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.287038       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.287747       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.289545       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.296881       1 controllermanager.go:761] "Started controller" controller="cronjob-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.297485       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.297679       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0612 15:03:47.720860   13752 command_runner.go:130] ! I0612 21:39:43.315673       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.316362       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.316724       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.331329       1 controllermanager.go:761] "Started controller" controller="ttl-after-finished-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.331610       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.331966       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.358081       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.358485       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0612 15:03:47.721390   13752 command_runner.go:130] ! I0612 21:39:43.358595       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0612 15:03:47.721547   13752 command_runner.go:130] ! I0612 21:39:43.358609       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0612 15:03:47.721547   13752 command_runner.go:130] ! I0612 21:39:43.373221       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0612 15:03:47.721547   13752 command_runner.go:130] ! I0612 21:39:43.373371       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0612 15:03:47.721547   13752 command_runner.go:130] ! I0612 21:39:43.373388       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0612 15:03:47.721642   13752 command_runner.go:130] ! I0612 21:39:43.386049       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0612 15:03:47.721683   13752 command_runner.go:130] ! I0612 21:39:43.386265       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0612 15:03:47.721683   13752 command_runner.go:130] ! I0612 21:39:43.387457       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0612 15:03:47.721683   13752 command_runner.go:130] ! I0612 21:39:43.473855       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0612 15:03:47.721683   13752 command_runner.go:130] ! I0612 21:39:43.474115       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0612 15:03:47.721755   13752 command_runner.go:130] ! I0612 21:39:43.474421       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0612 15:03:47.721755   13752 command_runner.go:130] ! I0612 21:39:43.622457       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0612 15:03:47.721755   13752 command_runner.go:130] ! I0612 21:39:43.622831       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0612 15:03:47.721820   13752 command_runner.go:130] ! I0612 21:39:43.622950       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0612 15:03:47.721820   13752 command_runner.go:130] ! I0612 21:39:43.776632       1 controllermanager.go:761] "Started controller" controller="replicationcontroller-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.777149       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.777203       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.923199       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.923416       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:43.923557       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.219008       1 controllermanager.go:761] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.219041       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.219093       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.219104       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.375322       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.375879       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.375896       1 shared_informer.go:313] Waiting for caches to sync for job
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.419335       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.419357       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.419672       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.435364       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.441191       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000\" does not exist"
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.456985       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.457052       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.460648       1 shared_informer.go:320] Caches are synced for GC
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.463138       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.469825       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.469846       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.469856       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.471608       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.471748       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.472789       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.474041       1 shared_informer.go:320] Caches are synced for TTL
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.475483       1 shared_informer.go:320] Caches are synced for PVC protection
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.475505       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.476080       1 shared_informer.go:320] Caches are synced for job
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.479252       1 shared_informer.go:320] Caches are synced for ephemeral
	I0612 15:03:47.721876   13752 command_runner.go:130] ! I0612 21:39:44.481788       1 shared_informer.go:320] Caches are synced for service account
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.488300       1 shared_informer.go:320] Caches are synced for persistent volume
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.491059       1 shared_informer.go:320] Caches are synced for namespace
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.499063       1 shared_informer.go:320] Caches are synced for cronjob
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.500304       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.507471       1 shared_informer.go:320] Caches are synced for daemon sets
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.525355       1 shared_informer.go:320] Caches are synced for taint
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.525889       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0612 15:03:47.722476   13752 command_runner.go:130] ! I0612 21:39:44.526177       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000"
	I0612 15:03:47.722761   13752 command_runner.go:130] ! I0612 21:39:44.526390       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:47.722761   13752 command_runner.go:130] ! I0612 21:39:44.526550       1 shared_informer.go:320] Caches are synced for HPA
	I0612 15:03:47.722761   13752 command_runner.go:130] ! I0612 21:39:44.526951       1 shared_informer.go:320] Caches are synced for stateful set
	I0612 15:03:47.722761   13752 command_runner.go:130] ! I0612 21:39:44.527038       1 shared_informer.go:320] Caches are synced for deployment
	I0612 15:03:47.722834   13752 command_runner.go:130] ! I0612 21:39:44.528601       1 shared_informer.go:320] Caches are synced for PV protection
	I0612 15:03:47.722861   13752 command_runner.go:130] ! I0612 21:39:44.528834       1 shared_informer.go:320] Caches are synced for crt configmap
	I0612 15:03:47.722861   13752 command_runner.go:130] ! I0612 21:39:44.531261       1 shared_informer.go:320] Caches are synced for node
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.531462       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.531679       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.531942       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.532097       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.532523       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.537873       1 shared_informer.go:320] Caches are synced for expand
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.543447       1 shared_informer.go:320] Caches are synced for attach detach
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.564610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.568950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000" podCIDRs=["10.244.0.0/24"]
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.621264       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.644803       1 shared_informer.go:320] Caches are synced for endpoint
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.677466       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.696400       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.723303       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.735837       1 shared_informer.go:320] Caches are synced for resource quota
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:44.758870       1 shared_informer.go:320] Caches are synced for disruption
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:45.157877       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:45.226557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0612 15:03:47.722894   13752 command_runner.go:130] ! I0612 21:39:45.226973       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.795416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="243.746414ms"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.868449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.90937ms"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.868845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.402µs"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.869382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="206.903µs"
	I0612 15:03:47.723446   13752 command_runner.go:130] ! I0612 21:39:45.905402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="386.807µs"
	I0612 15:03:47.723572   13752 command_runner.go:130] ! I0612 21:39:46.349409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.452815ms"
	I0612 15:03:47.723599   13752 command_runner.go:130] ! I0612 21:39:46.386321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.301621ms"
	I0612 15:03:47.723599   13752 command_runner.go:130] ! I0612 21:39:46.386974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="616.309µs"
	I0612 15:03:47.723599   13752 command_runner.go:130] ! I0612 21:39:56.441072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="366.601µs"
	I0612 15:03:47.723685   13752 command_runner.go:130] ! I0612 21:39:56.465727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.4µs"
	I0612 15:03:47.723685   13752 command_runner.go:130] ! I0612 21:39:57.870560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.5µs"
	I0612 15:03:47.723685   13752 command_runner.go:130] ! I0612 21:39:58.874445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448319ms"
	I0612 15:03:47.723780   13752 command_runner.go:130] ! I0612 21:39:58.875168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="103.901µs"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:39:59.529553       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:42:39.169243       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:42:39.188142       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:42:39.563565       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:42:58.063730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:24.138579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.052538ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:24.156190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.434267ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:24.156677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.099µs"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:24.183391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.299µs"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:26.908415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.051448ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:26.908853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34µs"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:27.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.474956ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:43:27.304566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488944ms"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:16.485552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:16.486568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:16.503987       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.2.0/24"]
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:19.629018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:47:35.032365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:55:19.767980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:57:52.374240       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:57:58.774442       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:57:58.774588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.723832   13752 command_runner.go:130] ! I0612 21:57:58.809041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.3.0/24"]
	I0612 15:03:47.724355   13752 command_runner.go:130] ! I0612 21:58:06.126407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.724355   13752 command_runner.go:130] ! I0612 21:59:45.222238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 15:03:47.738653   13752 logs.go:123] Gathering logs for describe nodes ...
	I0612 15:03:47.738653   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0612 15:03:47.956842   13752 command_runner.go:130] > Name:               multinode-025000
	I0612 15:03:47.956842   13752 command_runner.go:130] > Roles:              control-plane
	I0612 15:03:47.956842   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0612 15:03:47.956842   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:47.956842   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:47.956842   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:39:28 +0000
	I0612 15:03:47.956842   13752 command_runner.go:130] > Taints:             <none>
	I0612 15:03:47.956842   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:47.956842   13752 command_runner.go:130] > Lease:
	I0612 15:03:47.956842   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000
	I0612 15:03:47.956842   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:47.956842   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 22:03:42 +0000
	I0612 15:03:47.956842   13752 command_runner.go:130] > Conditions:
	I0612 15:03:47.956842   13752 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0612 15:03:47.956842   13752 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0612 15:03:47.956842   13752 command_runner.go:130] >   MemoryPressure   False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0612 15:03:47.956842   13752 command_runner.go:130] >   DiskPressure     False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0612 15:03:47.956842   13752 command_runner.go:130] >   PIDPressure      False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0612 15:03:47.956842   13752 command_runner.go:130] >   Ready            True    Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 22:03:11 +0000   KubeletReady                 kubelet is posting ready status
	I0612 15:03:47.956842   13752 command_runner.go:130] > Addresses:
	I0612 15:03:47.957371   13752 command_runner.go:130] >   InternalIP:  172.23.200.184
	I0612 15:03:47.957371   13752 command_runner.go:130] >   Hostname:    multinode-025000
	I0612 15:03:47.957371   13752 command_runner.go:130] > Capacity:
	I0612 15:03:47.957371   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.957371   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.957371   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.957371   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.957500   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.957500   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:47.957500   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.957500   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.957500   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.957500   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.957500   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.957500   13752 command_runner.go:130] > System Info:
	I0612 15:03:47.957556   13752 command_runner.go:130] >   Machine ID:                 e65e28dfa5bf4f27a0123e4ae1007793
	I0612 15:03:47.957556   13752 command_runner.go:130] >   System UUID:                3e5a42d3-ea80-0c4d-ad18-4b76e4f3e22f
	I0612 15:03:47.957556   13752 command_runner.go:130] >   Boot ID:                    0efecf43-b070-4a8f-b542-4d1fd07306ad
	I0612 15:03:47.957601   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:47.957601   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:47.957601   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:47.957601   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:47.957601   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:47.957693   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:47.957693   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:47.957693   13752 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0612 15:03:47.957735   13752 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0612 15:03:47.957735   13752 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0612 15:03:47.957735   13752 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:47.957773   13752 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0612 15:03:47.957773   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-45qqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:47.957814   13752 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-vgcxw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0612 15:03:47.957814   13752 command_runner.go:130] >   kube-system                 etcd-multinode-025000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0612 15:03:47.957850   13752 command_runner.go:130] >   kube-system                 kindnet-bqlg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0612 15:03:47.957850   13752 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-025000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0612 15:03:47.957891   13752 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-025000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:47.957927   13752 command_runner.go:130] >   kube-system                 kube-proxy-47lr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:47.957927   13752 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-025000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0612 15:03:47.957967   13752 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0612 15:03:47.957967   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:47.958003   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:47.958003   13752 command_runner.go:130] >   Resource           Requests     Limits
	I0612 15:03:47.958003   13752 command_runner.go:130] >   --------           --------     ------
	I0612 15:03:47.958043   13752 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0612 15:03:47.958043   13752 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0612 15:03:47.958043   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0612 15:03:47.958078   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0612 15:03:47.958078   13752 command_runner.go:130] > Events:
	I0612 15:03:47.958135   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:47.958135   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:47.958135   13752 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0612 15:03:47.958171   13752 command_runner.go:130] >   Normal  Starting                 74s                kube-proxy       
	I0612 15:03:47.958171   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:47.958211   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-025000 status is now: NodeReady
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.958239   13752 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	I0612 15:03:47.958239   13752 command_runner.go:130] > Name:               multinode-025000-m02
	I0612 15:03:47.958239   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:47.958239   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m02
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_42_39_0700
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:47.958239   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:47.958239   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:42:39 +0000
	I0612 15:03:47.958239   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:47.958239   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:47.958239   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:47.958239   13752 command_runner.go:130] > Lease:
	I0612 15:03:47.958239   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m02
	I0612 15:03:47.958239   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:47.958239   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:20 +0000
	I0612 15:03:47.958239   13752 command_runner.go:130] > Conditions:
	I0612 15:03:47.958762   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:47.958762   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:47.958820   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.958820   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.958820   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:59 +0000   Wed, 12 Jun 2024 22:03:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.958820   13752 command_runner.go:130] > Addresses:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   InternalIP:  172.23.196.105
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Hostname:    multinode-025000-m02
	I0612 15:03:47.958820   13752 command_runner.go:130] > Capacity:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.958820   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.958820   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.958820   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.958820   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.958820   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.958820   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.958820   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.958820   13752 command_runner.go:130] > System Info:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Machine ID:                 c11d7ff5518449f8bc8169a1fd7b0c4b
	I0612 15:03:47.958820   13752 command_runner.go:130] >   System UUID:                3b021c48-8479-f34c-83c2-77b944a77c5e
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Boot ID:                    67e77c09-c6b2-4c01-b167-2481dd4a7a96
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:47.958820   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:47.958820   13752 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0612 15:03:47.958820   13752 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0612 15:03:47.958820   13752 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0612 15:03:47.958820   13752 command_runner.go:130] >   default                     busybox-fc5497c4f-9bsls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0612 15:03:47.958820   13752 command_runner.go:130] >   kube-system                 kindnet-v4cqk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0612 15:03:47.958820   13752 command_runner.go:130] >   kube-system                 kube-proxy-tdcdp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0612 15:03:47.958820   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:47.958820   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:47.958820   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:47.958820   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:47.958820   13752 command_runner.go:130] > Events:
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0612 15:03:47.958820   13752 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0612 15:03:47.958820   13752 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0612 15:03:47.959359   13752 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:47.959359   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	I0612 15:03:47.959359   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.959359   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	I0612 15:03:47.959456   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.959456   13752 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-025000-m02 status is now: NodeReady
	I0612 15:03:47.959498   13752 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	I0612 15:03:47.959526   13752 command_runner.go:130] >   Normal  NodeNotReady             23s                node-controller  Node multinode-025000-m02 status is now: NodeNotReady
	I0612 15:03:47.959526   13752 command_runner.go:130] > Name:               multinode-025000-m03
	I0612 15:03:47.959566   13752 command_runner.go:130] > Roles:              <none>
	I0612 15:03:47.959566   13752 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0612 15:03:47.959566   13752 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0612 15:03:47.959566   13752 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0612 15:03:47.959566   13752 command_runner.go:130] >                     kubernetes.io/hostname=multinode-025000-m03
	I0612 15:03:47.959566   13752 command_runner.go:130] >                     kubernetes.io/os=linux
	I0612 15:03:47.959639   13752 command_runner.go:130] >                     minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	I0612 15:03:47.959639   13752 command_runner.go:130] >                     minikube.k8s.io/name=multinode-025000
	I0612 15:03:47.959682   13752 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0612 15:03:47.959682   13752 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_06_12T14_57_59_0700
	I0612 15:03:47.959682   13752 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.1
	I0612 15:03:47.959742   13752 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0612 15:03:47.959742   13752 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0612 15:03:47.959781   13752 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0612 15:03:47.959781   13752 command_runner.go:130] > CreationTimestamp:  Wed, 12 Jun 2024 21:57:58 +0000
	I0612 15:03:47.959781   13752 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0612 15:03:47.959861   13752 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0612 15:03:47.959861   13752 command_runner.go:130] > Unschedulable:      false
	I0612 15:03:47.959861   13752 command_runner.go:130] > Lease:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   HolderIdentity:  multinode-025000-m03
	I0612 15:03:47.959903   13752 command_runner.go:130] >   AcquireTime:     <unset>
	I0612 15:03:47.959903   13752 command_runner.go:130] >   RenewTime:       Wed, 12 Jun 2024 21:59:00 +0000
	I0612 15:03:47.959903   13752 command_runner.go:130] > Conditions:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0612 15:03:47.959903   13752 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0612 15:03:47.959903   13752 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.959903   13752 command_runner.go:130] >   DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.959903   13752 command_runner.go:130] >   PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Ready            Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0612 15:03:47.959903   13752 command_runner.go:130] > Addresses:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   InternalIP:  172.23.206.72
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Hostname:    multinode-025000-m03
	I0612 15:03:47.959903   13752 command_runner.go:130] > Capacity:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.959903   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.959903   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.959903   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.959903   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.959903   13752 command_runner.go:130] > Allocatable:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   cpu:                2
	I0612 15:03:47.959903   13752 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0612 15:03:47.959903   13752 command_runner.go:130] >   hugepages-2Mi:      0
	I0612 15:03:47.959903   13752 command_runner.go:130] >   memory:             2164264Ki
	I0612 15:03:47.959903   13752 command_runner.go:130] >   pods:               110
	I0612 15:03:47.959903   13752 command_runner.go:130] > System Info:
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Machine ID:                 b62d5e6740dc42d880d6595ac7dd57ae
	I0612 15:03:47.959903   13752 command_runner.go:130] >   System UUID:                31a13a9b-b7c6-6643-8352-fb322079216a
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Boot ID:                    a21b9eff-2471-4589-9e35-5845aae64358
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Kernel Version:             5.10.207
	I0612 15:03:47.959903   13752 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Operating System:           linux
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Architecture:               amd64
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Container Runtime Version:  docker://26.1.4
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Kubelet Version:            v1.30.1
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Kube-Proxy Version:         v1.30.1
	I0612 15:03:47.959903   13752 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0612 15:03:47.959903   13752 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0612 15:03:47.959903   13752 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0612 15:03:47.959903   13752 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0612 15:03:47.959903   13752 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0612 15:03:47.959903   13752 command_runner.go:130] >   kube-system                 kindnet-8252q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0612 15:03:47.959903   13752 command_runner.go:130] >   kube-system                 kube-proxy-7jwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0612 15:03:47.960458   13752 command_runner.go:130] > Allocated resources:
	I0612 15:03:47.960458   13752 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0612 15:03:47.960458   13752 command_runner.go:130] >   Resource           Requests   Limits
	I0612 15:03:47.960458   13752 command_runner.go:130] >   --------           --------   ------
	I0612 15:03:47.960458   13752 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0612 15:03:47.960458   13752 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0612 15:03:47.960458   13752 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0612 15:03:47.960458   13752 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0612 15:03:47.960458   13752 command_runner.go:130] > Events:
	I0612 15:03:47.960458   13752 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0612 15:03:47.960609   13752 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0612 15:03:47.960609   13752 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0612 15:03:47.960628   13752 command_runner.go:130] >   Normal  Starting                 5m46s                  kube-proxy       
	I0612 15:03:47.960628   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.960628   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:47.960698   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.960698   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:47.960698   13752 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:47.960758   13752 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	I0612 15:03:47.960780   13752 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  RegisteredNode           5m48s                  node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  NodeReady                5m41s                  kubelet          Node multinode-025000-m03 status is now: NodeReady
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  NodeNotReady             4m2s                   node-controller  Node multinode-025000-m03 status is now: NodeNotReady
	I0612 15:03:47.960808   13752 command_runner.go:130] >   Normal  RegisteredNode           63s                    node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
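The node summary above is standard `kubectl describe node` output gathered by the log collector. A minimal sketch of reproducing it against this cluster, assuming the kubeconfig context is named after the profile as elsewhere in this report:

    kubectl --context multinode-025000 describe node multinode-025000-m03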
	I0612 15:03:47.971911   13752 logs.go:123] Gathering logs for coredns [26e5daf354e3] ...
	I0612 15:03:47.971911   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e5daf354e3"
	I0612 15:03:48.001521   13752 command_runner.go:130] > .:53
	I0612 15:03:48.001615   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:48.001666   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:48.001666   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:48.001666   13752 command_runner.go:130] > [INFO] 127.0.0.1:54952 - 9035 "HINFO IN 225709527310201015.7757756956422223857. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039110892s
	I0612 15:03:48.001931   13752 logs.go:123] Gathering logs for coredns [e83cf4eef49e] ...
	I0612 15:03:48.001931   13752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83cf4eef49e"
	I0612 15:03:48.030949   13752 command_runner.go:130] > .:53
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	I0612 15:03:48.034281   13752 command_runner.go:130] > CoreDNS-1.11.1
	I0612 15:03:48.034281   13752 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 127.0.0.1:53490 - 39118 "HINFO IN 4677201826540465335.2322207397622737457. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048277073s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:49256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267302s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:54623 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.08558s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:51804 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.048771085s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:53027 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.100151983s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:34534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001199s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:44985 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000141701s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:54544 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000543s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:55517 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000123601s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:42995 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099501s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:51839 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.135718274s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:52123 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000304602s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:36740 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274801s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:48333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003287018s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:55754 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000962s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:51695 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000224102s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:49605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096301s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:37746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000283001s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:54995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106501s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:49201 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077401s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:60577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:36057 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107301s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:43898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:49177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:45207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000584s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:36676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151001s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:60305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305802s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:37468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:34743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240801s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:42306 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000309601s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:36509 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152901s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:55614 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000545s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:39195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130301s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:34618 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272902s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:44444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177201s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.0.3:35691 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001307s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:41925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207401s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:44306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000736s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] 10.244.1.2:46158 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000547s
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0612 15:03:48.034281   13752 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
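The CoreDNS entries above are ordinary in-cluster lookups. A sketch of generating comparable queries, reusing the busybox image the harness uses elsewhere (the pod name dns-probe is illustrative):

    kubectl --context multinode-025000 run --rm dns-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      nslookup kubernetes.default.svc.cluster.local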
	I0612 15:03:50.538167   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:03:50.538424   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:50.538424   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:50.538424   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:50.543025   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:03:50.543816   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:50.543816   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:50.543816   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:50.543816   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:50.543859   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:50.543859   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:50 GMT
	I0612 15:03:50.543859   13752 round_trippers.go:580]     Audit-Id: 20076492-16ea-4c7d-80f5-0f9ff68b238a
	I0612 15:03:50.546067   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1991"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0612 15:03:50.550031   13752 system_pods.go:59] 12 kube-system pods found
	I0612 15:03:50.550031   13752 system_pods.go:61] "coredns-7db6d8ff4d-vgcxw" [c5bd143a-d39e-46af-9308-0a97bb45729c] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "etcd-multinode-025000" [be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kindnet-8252q" [b1c2b9b3-0fd6-4393-b818-e7e823f89acc] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kindnet-bqlg8" [1f004a05-3f5f-444b-9ac0-88f0e23da904] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kindnet-v4cqk" [31faf6fc-5371-4f19-b71f-0a41b6dd2f79] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-apiserver-multinode-025000" [63e55411-d432-4e5a-becc-fae0887fecae] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-controller-manager-multinode-025000" [68c9aa4f-49ee-439c-ad51-7943e65c0085] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-proxy-47lr8" [10b24fa7-8eea-4fbb-ab18-404e853aa7ab] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-proxy-7jwdg" [643030f7-b876-4243-bacc-04205e88cc9e] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-proxy-tdcdp" [b623833c-ce55-46b1-a840-99b3143adac1] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "kube-scheduler-multinode-025000" [83b272cb-1286-47d8-bcb1-a66056dff2a5] Running
	I0612 15:03:50.550031   13752 system_pods.go:61] "storage-provisioner" [d20f7489-1aa1-44b8-9221-4d1849884be4] Running
	I0612 15:03:50.550031   13752 system_pods.go:74] duration metric: took 3.7032455s to wait for pod list to return data ...
	I0612 15:03:50.550031   13752 default_sa.go:34] waiting for default service account to be created ...
	I0612 15:03:50.550031   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/default/serviceaccounts
	I0612 15:03:50.550615   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:50.550615   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:50.550615   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:50.553838   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:03:50.553838   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:50.553838   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:50.553838   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Content-Length: 262
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:50 GMT
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Audit-Id: 202f821f-e89b-4e4d-b971-1caa3bb2ae61
	I0612 15:03:50.553838   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:50.553838   13752 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1991"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"876e1679-16ec-44bf-9460-cce6ea3acbf0","resourceVersion":"355","creationTimestamp":"2024-06-12T21:39:45Z"}}]}
	I0612 15:03:50.554578   13752 default_sa.go:45] found service account: "default"
	I0612 15:03:50.554602   13752 default_sa.go:55] duration metric: took 4.5712ms for default service account to be created ...
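The same check can be run by hand; a sketch, again assuming the profile-named context:

    kubectl --context multinode-025000 get serviceaccount default -n default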
	I0612 15:03:50.554602   13752 system_pods.go:116] waiting for k8s-apps to be running ...
	I0612 15:03:50.554720   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:03:50.554720   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:50.554720   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:50.554720   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:50.557111   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:03:50.557111   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:50.557111   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:50.557111   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:50.557111   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:50 GMT
	I0612 15:03:50.557111   13752 round_trippers.go:580]     Audit-Id: 0f43bd1a-277f-471c-b3f3-7b6b2e3218b1
	I0612 15:03:50.557111   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:50.557111   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:50.561281   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1991"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86610 chars]
	I0612 15:03:50.565895   13752 system_pods.go:86] 12 kube-system pods found
	I0612 15:03:50.565895   13752 system_pods.go:89] "coredns-7db6d8ff4d-vgcxw" [c5bd143a-d39e-46af-9308-0a97bb45729c] Running
	I0612 15:03:50.565895   13752 system_pods.go:89] "etcd-multinode-025000" [be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kindnet-8252q" [b1c2b9b3-0fd6-4393-b818-e7e823f89acc] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kindnet-bqlg8" [1f004a05-3f5f-444b-9ac0-88f0e23da904] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kindnet-v4cqk" [31faf6fc-5371-4f19-b71f-0a41b6dd2f79] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kube-apiserver-multinode-025000" [63e55411-d432-4e5a-becc-fae0887fecae] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kube-controller-manager-multinode-025000" [68c9aa4f-49ee-439c-ad51-7943e65c0085] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kube-proxy-47lr8" [10b24fa7-8eea-4fbb-ab18-404e853aa7ab] Running
	I0612 15:03:50.565988   13752 system_pods.go:89] "kube-proxy-7jwdg" [643030f7-b876-4243-bacc-04205e88cc9e] Running
	I0612 15:03:50.566109   13752 system_pods.go:89] "kube-proxy-tdcdp" [b623833c-ce55-46b1-a840-99b3143adac1] Running
	I0612 15:03:50.566109   13752 system_pods.go:89] "kube-scheduler-multinode-025000" [83b272cb-1286-47d8-bcb1-a66056dff2a5] Running
	I0612 15:03:50.566109   13752 system_pods.go:89] "storage-provisioner" [d20f7489-1aa1-44b8-9221-4d1849884be4] Running
	I0612 15:03:50.566145   13752 system_pods.go:126] duration metric: took 11.4229ms to wait for k8s-apps to be running ...
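The k8s-apps sweep above amounts to listing the running kube-system pods; a sketch using a field selector:

    kubectl --context multinode-025000 get pods -n kube-system \
      --field-selector=status.phase=Running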
	I0612 15:03:50.566145   13752 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 15:03:50.586056   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 15:03:50.605851   13752 system_svc.go:56] duration metric: took 39.7055ms WaitForService to wait for kubelet
	I0612 15:03:50.605851   13752 kubeadm.go:576] duration metric: took 1m14.7841386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 15:03:50.605851   13752 node_conditions.go:102] verifying NodePressure condition ...
	I0612 15:03:50.613058   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes
	I0612 15:03:50.613139   13752 round_trippers.go:469] Request Headers:
	I0612 15:03:50.613139   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:03:50.613209   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:03:50.613438   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:03:50.613438   13752 round_trippers.go:577] Response Headers:
	I0612 15:03:50.613438   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:03:50.613438   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:03:50 GMT
	I0612 15:03:50.613438   13752 round_trippers.go:580]     Audit-Id: f0433259-994d-465d-87b3-9f02e99a7845
	I0612 15:03:50.618051   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:03:50.618051   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:03:50.618051   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:03:50.618598   13752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1991"},"items":[{"metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16259 chars]
	I0612 15:03:50.619678   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:03:50.619734   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:03:50.619734   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:03:50.619812   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:03:50.619812   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:03:50.619812   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:03:50.619812   13752 node_conditions.go:105] duration metric: took 13.9615ms to run NodePressure ...
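The NodePressure pass reads each node's reported capacity. A sketch pulling the same two figures per node (the column names are illustrative):

    kubectl --context multinode-025000 get nodes \
      -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,STORAGE:.status.capacity.ephemeral-storage'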
	I0612 15:03:50.619812   13752 start.go:240] waiting for startup goroutines ...
	I0612 15:03:50.619812   13752 start.go:245] waiting for cluster config update ...
	I0612 15:03:50.619886   13752 start.go:254] writing updated cluster config ...
	I0612 15:03:50.624338   13752 out.go:177] 
	I0612 15:03:50.630612   13752 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:03:50.639807   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:03:50.639807   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:03:50.648108   13752 out.go:177] * Starting "multinode-025000-m02" worker node in "multinode-025000" cluster
	I0612 15:03:50.648108   13752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 15:03:50.648108   13752 cache.go:56] Caching tarball of preloaded images
	I0612 15:03:50.648108   13752 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 15:03:50.648108   13752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 15:03:50.651280   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:03:50.653529   13752 start.go:360] acquireMachinesLock for multinode-025000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 15:03:50.653529   13752 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-025000-m02"
	I0612 15:03:50.653529   13752 start.go:96] Skipping create...Using existing machine configuration
	I0612 15:03:50.653529   13752 fix.go:54] fixHost starting: m02
	I0612 15:03:50.654779   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:03:52.788560   13752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0612 15:03:52.790100   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:03:52.790100   13752 fix.go:112] recreateIfNeeded on multinode-025000-m02: state=Stopped err=<nil>
	W0612 15:03:52.790100   13752 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 15:03:52.794283   13752 out.go:177] * Restarting existing hyperv VM for "multinode-025000-m02" ...
	I0612 15:03:52.797091   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-025000-m02
	I0612 15:03:55.756023   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:03:55.757242   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:03:55.757242   13752 main.go:141] libmachine: Waiting for host to start...
	I0612 15:03:55.757242   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:03:57.928593   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:03:57.928593   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:03:57.938810   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:00.364669   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:04:00.371455   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:01.383557   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:03.477291   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:03.477291   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:03.486274   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:05.950061   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:04:05.950061   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:06.967347   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:09.141470   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:09.141470   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:09.141627   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:11.617990   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:04:11.617990   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:12.621262   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:14.844671   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:14.844744   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:14.844810   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:17.322154   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:04:17.324672   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:18.334047   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:20.542750   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:20.542750   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:20.542750   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:23.012398   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:23.022322   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:23.025123   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:25.102306   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:25.104777   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:25.104832   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:27.713960   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:27.713960   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:27.726069   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:04:27.728861   13752 machine.go:94] provisionDockerMachine start ...
	I0612 15:04:27.728861   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:29.923353   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:29.923353   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:29.936170   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:32.366390   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:32.366390   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:32.383850   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:04:32.383987   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:04:32.383987   13752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0612 15:04:32.513468   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0612 15:04:32.513468   13752 buildroot.go:166] provisioning hostname "multinode-025000-m02"
	I0612 15:04:32.513468   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:34.586891   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:34.593830   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:34.593830   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:37.047855   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:37.047855   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:37.064835   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:04:37.065616   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:04:37.065616   13752 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025000-m02 && echo "multinode-025000-m02" | sudo tee /etc/hostname
	I0612 15:04:37.219666   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025000-m02
	
	I0612 15:04:37.219794   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:39.271246   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:39.279675   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:39.279675   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:41.755052   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:41.755052   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:41.770728   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:04:41.771339   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:04:41.771412   13752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0612 15:04:41.918296   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0612 15:04:41.918296   13752 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0612 15:04:41.918412   13752 buildroot.go:174] setting up certificates
	I0612 15:04:41.918412   13752 provision.go:84] configureAuth start
	I0612 15:04:41.918510   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:43.987317   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:43.998817   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:43.998817   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:46.536778   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:46.536778   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:46.536778   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:48.576821   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:48.576821   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:48.576821   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:51.030137   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:51.030137   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:51.030137   13752 provision.go:143] copyHostCerts
	I0612 15:04:51.032373   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0612 15:04:51.032827   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0612 15:04:51.032827   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0612 15:04:51.033417   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0612 15:04:51.034697   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0612 15:04:51.035010   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0612 15:04:51.035010   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0612 15:04:51.035269   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0612 15:04:51.035600   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0612 15:04:51.036532   13752 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0612 15:04:51.036532   13752 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0612 15:04:51.036715   13752 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0612 15:04:51.037184   13752 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-025000-m02 san=[127.0.0.1 172.23.204.132 localhost minikube multinode-025000-m02]
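The SAN list baked into the generated server.pem can be confirmed with openssl; a sketch, with the machine store path abbreviated (the report's full path is under C:\Users\jenkins.minikube1\minikube-integration\.minikube):

    openssl x509 -noout -text -in machines/server.pem \
      | grep -A1 'Subject Alternative Name'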
	I0612 15:04:51.294999   13752 provision.go:177] copyRemoteCerts
	I0612 15:04:51.316836   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0612 15:04:51.316836   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:53.379898   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:53.391252   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:53.391252   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:04:55.855747   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:04:55.855747   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:55.867123   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:04:55.964974   13752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6481227s)
	I0612 15:04:55.965098   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0612 15:04:55.965581   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0612 15:04:56.009622   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0612 15:04:56.010097   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0612 15:04:56.053102   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0612 15:04:56.055574   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0612 15:04:56.102614   13752 provision.go:87] duration metric: took 14.1841548s to configureAuth
	I0612 15:04:56.102734   13752 buildroot.go:189] setting minikube options for container-runtime
	I0612 15:04:56.103379   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:04:56.103443   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:04:58.138283   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:04:58.149275   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:04:58.149364   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:00.573468   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:00.573468   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:00.589848   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:00.590045   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:00.590045   13752 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0612 15:05:00.717121   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0612 15:05:00.717121   13752 buildroot.go:70] root file system type: tmpfs
	I0612 15:05:00.717412   13752 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0612 15:05:00.717412   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:02.743742   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:02.755872   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:02.755872   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:05.160289   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:05.160289   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:05.175954   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:05.176972   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:05.177060   13752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.23.200.184"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0612 15:05:05.334048   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.23.200.184
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0612 15:05:05.334201   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:07.403075   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:07.403075   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:07.413954   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:09.886807   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:09.886807   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:09.904696   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:09.905232   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:09.905232   13752 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0612 15:05:12.185858   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0612 15:05:12.185858   13752 machine.go:97] duration metric: took 44.4568496s to provisionDockerMachine
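A sketch of inspecting the freshly installed unit from the worker node (minikube's ssh subcommand with the -n node selector; exact flags assumed to match this minikube version):

    minikube -p multinode-025000 ssh -n m02 -- sudo systemctl cat docker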
	I0612 15:05:12.185858   13752 start.go:293] postStartSetup for "multinode-025000-m02" (driver="hyperv")
	I0612 15:05:12.185858   13752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0612 15:05:12.196892   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0612 15:05:12.196892   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:14.297058   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:14.297058   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:14.309152   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:16.758401   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:16.769950   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:16.770039   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:05:16.883549   13752 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6866412s)
	I0612 15:05:16.894999   13752 ssh_runner.go:195] Run: cat /etc/os-release
	I0612 15:05:16.902602   13752 command_runner.go:130] > NAME=Buildroot
	I0612 15:05:16.902602   13752 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0612 15:05:16.902602   13752 command_runner.go:130] > ID=buildroot
	I0612 15:05:16.902602   13752 command_runner.go:130] > VERSION_ID=2023.02.9
	I0612 15:05:16.902602   13752 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0612 15:05:16.902602   13752 info.go:137] Remote host: Buildroot 2023.02.9
	I0612 15:05:16.902602   13752 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0612 15:05:16.903363   13752 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0612 15:05:16.904464   13752 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> 12802.pem in /etc/ssl/certs
	I0612 15:05:16.904464   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /etc/ssl/certs/12802.pem
	I0612 15:05:16.915381   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0612 15:05:16.936106   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /etc/ssl/certs/12802.pem (1708 bytes)
	I0612 15:05:16.979307   13752 start.go:296] duration metric: took 4.7934332s for postStartSetup
	I0612 15:05:16.979333   13752 fix.go:56] duration metric: took 1m26.3255193s for fixHost
	I0612 15:05:16.979333   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:19.039892   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:19.050914   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:19.050914   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:21.510031   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:21.521336   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:21.528212   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:21.528778   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:21.528860   13752 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0612 15:05:21.659683   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718229921.655053520
	
	I0612 15:05:21.659683   13752 fix.go:216] guest clock: 1718229921.655053520
	I0612 15:05:21.659683   13752 fix.go:229] Guest: 2024-06-12 15:05:21.65505352 -0700 PDT Remote: 2024-06-12 15:05:16.9793333 -0700 PDT m=+294.041716601 (delta=4.67572022s)
	I0612 15:05:21.659854   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:23.744338   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:23.757408   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:23.757408   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:26.193091   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:26.193091   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:26.210278   13752 main.go:141] libmachine: Using SSH client type: native
	I0612 15:05:26.210766   13752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ea540] 0x13ed120 <nil>  [] 0s} 172.23.204.132 22 <nil> <nil>}
	I0612 15:05:26.210766   13752 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1718229921
	I0612 15:05:26.356668   13752 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Jun 12 22:05:21 UTC 2024
	
	I0612 15:05:26.356668   13752 fix.go:236] clock set: Wed Jun 12 22:05:21 UTC 2024
	 (err=<nil>)
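
The clock-fix step above reads the guest clock with "date +%s.%N", diffs it against the host-side reference time, and realigns the guest with "sudo date -s @<seconds>". A minimal Go sketch of that arithmetic (hypothetical, not minikube's actual fix.go; the constants are taken verbatim from the log lines above):

// Sketch of the guest-clock drift computation logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output such as "1718229921.655053520".
// Assumes the fractional part, when present, is the usual 9-digit nanosecond field.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1718229921.655053520")                // stdout of date +%s.%N above
	host := time.Date(2024, 6, 12, 22, 5, 16, 979333300, time.UTC)    // host-side "Remote" timestamp in UTC
	fmt.Printf("guest clock drift: %v\n", guest.Sub(host))            // prints 4.67572022s, matching the log's delta
	fmt.Printf("realign with: sudo date -s @%d\n", guest.Unix())      // matches the SSH command the log runs next
}
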
	I0612 15:05:26.356668   13752 start.go:83] releasing machines lock for "multinode-025000-m02", held for 1m35.7028233s
	I0612 15:05:26.356668   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:28.463909   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:28.463909   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:28.475301   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:31.107427   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:31.107502   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:31.110144   13752 out.go:177] * Found network options:
	I0612 15:05:31.113248   13752 out.go:177]   - NO_PROXY=172.23.200.184
	W0612 15:05:31.115585   13752 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 15:05:31.117982   13752 out.go:177]   - NO_PROXY=172.23.200.184
	W0612 15:05:31.120848   13752 proxy.go:119] fail to check proxy env: Error ip not in block
	W0612 15:05:31.123156   13752 proxy.go:119] fail to check proxy env: Error ip not in block
	I0612 15:05:31.126385   13752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0612 15:05:31.126385   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:31.137186   13752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0612 15:05:31.137186   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:33.410239   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:33.410344   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:36.072617   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:36.085997   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:36.086183   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:05:36.110051   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:05:36.110108   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:36.110108   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:05:36.239650   13752 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0612 15:05:36.239650   13752 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.113248s)
	I0612 15:05:36.239650   13752 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0612 15:05:36.239650   13752 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1024472s)
	W0612 15:05:36.239650   13752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0612 15:05:36.250503   13752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0612 15:05:36.294696   13752 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0612 15:05:36.294696   13752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0612 15:05:36.294696   13752 start.go:494] detecting cgroup driver to use...
	I0612 15:05:36.294696   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 15:05:36.331076   13752 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0612 15:05:36.343247   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0612 15:05:36.379586   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0612 15:05:36.403323   13752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0612 15:05:36.414297   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0612 15:05:36.447987   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 15:05:36.481550   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0612 15:05:36.512594   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0612 15:05:36.550090   13752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0612 15:05:36.586911   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0612 15:05:36.617435   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0612 15:05:36.649684   13752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0612 15:05:36.686014   13752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0612 15:05:36.706594   13752 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0612 15:05:36.718254   13752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0612 15:05:36.747992   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:36.940371   13752 ssh_runner.go:195] Run: sudo systemctl restart containerd
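
The sed one-liners above edit /etc/containerd/config.toml in place, for example forcing SystemdCgroup = false so containerd uses the cgroupfs driver the log announces. A small Go sketch of the same anchored-regex rewrite (illustrative only; the real step shells out to sed exactly as logged):

// Sketch of the `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` rewrite.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// (?m) makes ^ and $ match per line; ${1} preserves the original indentation.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
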
	I0612 15:05:36.974252   13752 start.go:494] detecting cgroup driver to use...
	I0612 15:05:36.986707   13752 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0612 15:05:37.012971   13752 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0612 15:05:37.012971   13752 command_runner.go:130] > [Unit]
	I0612 15:05:37.013084   13752 command_runner.go:130] > Description=Docker Application Container Engine
	I0612 15:05:37.013084   13752 command_runner.go:130] > Documentation=https://docs.docker.com
	I0612 15:05:37.013152   13752 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0612 15:05:37.013152   13752 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0612 15:05:37.013220   13752 command_runner.go:130] > StartLimitBurst=3
	I0612 15:05:37.013274   13752 command_runner.go:130] > StartLimitIntervalSec=60
	I0612 15:05:37.013274   13752 command_runner.go:130] > [Service]
	I0612 15:05:37.013274   13752 command_runner.go:130] > Type=notify
	I0612 15:05:37.013314   13752 command_runner.go:130] > Restart=on-failure
	I0612 15:05:37.013350   13752 command_runner.go:130] > Environment=NO_PROXY=172.23.200.184
	I0612 15:05:37.013350   13752 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0612 15:05:37.013388   13752 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0612 15:05:37.013424   13752 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0612 15:05:37.013478   13752 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0612 15:05:37.013478   13752 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0612 15:05:37.013512   13752 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0612 15:05:37.013549   13752 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0612 15:05:37.013549   13752 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0612 15:05:37.013549   13752 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0612 15:05:37.013549   13752 command_runner.go:130] > ExecStart=
	I0612 15:05:37.013549   13752 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0612 15:05:37.013549   13752 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0612 15:05:37.013695   13752 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0612 15:05:37.013695   13752 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0612 15:05:37.013731   13752 command_runner.go:130] > LimitNOFILE=infinity
	I0612 15:05:37.013731   13752 command_runner.go:130] > LimitNPROC=infinity
	I0612 15:05:37.013731   13752 command_runner.go:130] > LimitCORE=infinity
	I0612 15:05:37.013731   13752 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0612 15:05:37.013731   13752 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0612 15:05:37.013731   13752 command_runner.go:130] > TasksMax=infinity
	I0612 15:05:37.013731   13752 command_runner.go:130] > TimeoutStartSec=0
	I0612 15:05:37.013731   13752 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0612 15:05:37.013731   13752 command_runner.go:130] > Delegate=yes
	I0612 15:05:37.013731   13752 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0612 15:05:37.013731   13752 command_runner.go:130] > KillMode=process
	I0612 15:05:37.013731   13752 command_runner.go:130] > [Install]
	I0612 15:05:37.013731   13752 command_runner.go:130] > WantedBy=multi-user.target
	I0612 15:05:37.025273   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 15:05:37.059852   13752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0612 15:05:37.100371   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0612 15:05:37.138018   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 15:05:37.175943   13752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0612 15:05:37.243461   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0612 15:05:37.268953   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0612 15:05:37.302431   13752 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0612 15:05:37.316754   13752 ssh_runner.go:195] Run: which cri-dockerd
	I0612 15:05:37.320876   13752 command_runner.go:130] > /usr/bin/cri-dockerd
	I0612 15:05:37.338790   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0612 15:05:37.358517   13752 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0612 15:05:37.403563   13752 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0612 15:05:37.590876   13752 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0612 15:05:37.776758   13752 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0612 15:05:37.777034   13752 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0612 15:05:37.823278   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:38.031246   13752 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0612 15:05:40.647681   13752 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6163867s)
	I0612 15:05:40.659388   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0612 15:05:40.700003   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 15:05:40.738723   13752 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0612 15:05:40.945882   13752 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0612 15:05:41.136425   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:41.327495   13752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0612 15:05:41.372148   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0612 15:05:41.414469   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:41.603576   13752 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0612 15:05:41.712870   13752 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0612 15:05:41.726379   13752 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0612 15:05:41.729851   13752 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0612 15:05:41.729851   13752 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0612 15:05:41.729851   13752 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0612 15:05:41.729851   13752 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0612 15:05:41.729851   13752 command_runner.go:130] > Access: 2024-06-12 22:05:41.624113268 +0000
	I0612 15:05:41.729851   13752 command_runner.go:130] > Modify: 2024-06-12 22:05:41.624113268 +0000
	I0612 15:05:41.729851   13752 command_runner.go:130] > Change: 2024-06-12 22:05:41.629113300 +0000
	I0612 15:05:41.729851   13752 command_runner.go:130] >  Birth: -
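
start.go announces a 60s wait for /var/run/cri-dockerd.sock and then stats it, as shown above. A hypothetical Go sketch of such a socket wait (minikube's actual implementation may differ; this just polls stat until the path exists and is a socket):

// Sketch of "Will wait 60s for socket path /var/run/cri-dockerd.sock".
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Succeed once the path exists and its mode reports a unix socket,
		// as the stat output above does.
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
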
	I0612 15:05:41.729851   13752 start.go:562] Will wait 60s for crictl version
	I0612 15:05:41.755436   13752 ssh_runner.go:195] Run: which crictl
	I0612 15:05:41.761991   13752 command_runner.go:130] > /usr/bin/crictl
	I0612 15:05:41.774612   13752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0612 15:05:41.833501   13752 command_runner.go:130] > Version:  0.1.0
	I0612 15:05:41.833501   13752 command_runner.go:130] > RuntimeName:  docker
	I0612 15:05:41.833501   13752 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0612 15:05:41.833501   13752 command_runner.go:130] > RuntimeApiVersion:  v1
	I0612 15:05:41.833501   13752 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0612 15:05:41.847894   13752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 15:05:41.873725   13752 command_runner.go:130] > 26.1.4
	I0612 15:05:41.896589   13752 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0612 15:05:41.932241   13752 command_runner.go:130] > 26.1.4
	I0612 15:05:41.937251   13752 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0612 15:05:41.939322   13752 out.go:177]   - env NO_PROXY=172.23.200.184
	I0612 15:05:41.942089   13752 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0612 15:05:41.946227   13752 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0612 15:05:41.946227   13752 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0612 15:05:41.946227   13752 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0612 15:05:41.946227   13752 ip.go:207] Found interface: {Index:16 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:56:a0:18 Flags:up|broadcast|multicast|running}
	I0612 15:05:41.949205   13752 ip.go:210] interface addr: fe80::52c5:dd8:dd1e:a400/64
	I0612 15:05:41.949205   13752 ip.go:210] interface addr: 172.23.192.1/20
	I0612 15:05:41.962502   13752 ssh_runner.go:195] Run: grep 172.23.192.1	host.minikube.internal$ /etc/hosts
	I0612 15:05:41.969716   13752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.23.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
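
The /etc/hosts rewrite above is idempotent: it filters out any existing host.minikube.internal line before appending the fresh mapping, so repeated starts do not accumulate duplicates. A sketch of the same upsert in Go (hypothetical helper, not minikube code):

// Sketch of the grep -v / echo / cp pipeline the log runs against /etc/hosts.
package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends "ip\tname".
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.23.0.9\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "172.23.192.1", "host.minikube.internal"))
}
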
	I0612 15:05:41.992291   13752 mustload.go:65] Loading cluster: multinode-025000
	I0612 15:05:41.993104   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:05:41.993342   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:05:44.195800   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:44.207946   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:44.207946   13752 host.go:66] Checking if "multinode-025000" exists ...
	I0612 15:05:44.209193   13752 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000 for IP: 172.23.204.132
	I0612 15:05:44.209193   13752 certs.go:194] generating shared ca certs ...
	I0612 15:05:44.209331   13752 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 15:05:44.209694   13752 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0612 15:05:44.210508   13752 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0612 15:05:44.210628   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0612 15:05:44.210628   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0612 15:05:44.210628   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0612 15:05:44.211162   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0612 15:05:44.211756   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem (1338 bytes)
	W0612 15:05:44.212064   13752 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280_empty.pem, impossibly tiny 0 bytes
	I0612 15:05:44.212141   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0612 15:05:44.212371   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0612 15:05:44.212601   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0612 15:05:44.212838   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0612 15:05:44.213654   13752 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem (1708 bytes)
	I0612 15:05:44.213875   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.214079   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem -> /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.214278   13752 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem -> /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.214525   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0612 15:05:44.265097   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0612 15:05:44.315141   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0612 15:05:44.361754   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0612 15:05:44.411644   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0612 15:05:44.459910   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1280.pem --> /usr/share/ca-certificates/1280.pem (1338 bytes)
	I0612 15:05:44.506569   13752 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\12802.pem --> /usr/share/ca-certificates/12802.pem (1708 bytes)
	I0612 15:05:44.564759   13752 ssh_runner.go:195] Run: openssl version
	I0612 15:05:44.573797   13752 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0612 15:05:44.585415   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1280.pem && ln -fs /usr/share/ca-certificates/1280.pem /etc/ssl/certs/1280.pem"
	I0612 15:05:44.620988   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.627331   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.628837   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 12 20:15 /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.644759   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1280.pem
	I0612 15:05:44.647443   13752 command_runner.go:130] > 51391683
	I0612 15:05:44.667423   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1280.pem /etc/ssl/certs/51391683.0"
	I0612 15:05:44.704038   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12802.pem && ln -fs /usr/share/ca-certificates/12802.pem /etc/ssl/certs/12802.pem"
	I0612 15:05:44.739020   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.746867   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.746867   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 12 20:15 /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.757883   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12802.pem
	I0612 15:05:44.769373   13752 command_runner.go:130] > 3ec20f2e
	I0612 15:05:44.782071   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12802.pem /etc/ssl/certs/3ec20f2e.0"
	I0612 15:05:44.814865   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0612 15:05:44.847078   13752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.855375   13752 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.855521   13752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 12 20:00 /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.865355   13752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0612 15:05:44.875520   13752 command_runner.go:130] > b5213941
	I0612 15:05:44.887608   13752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
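
The openssl/ln sequence above installs each certificate under /etc/ssl/certs/<subject-hash>.0, the hashed-directory layout that OpenSSL-based clients scan. A hedged Go sketch of that step (assumes openssl on PATH and write access to /etc/ssl/certs; simplified relative to the logged commands, which link via an intermediate copy):

// Sketch of `openssl x509 -hash -noout -in <pem>` followed by `ln -fs`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
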
	I0612 15:05:44.920861   13752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0612 15:05:44.929372   13752 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 15:05:44.929447   13752 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0612 15:05:44.929694   13752 kubeadm.go:928] updating node {m02 172.23.204.132 8443 v1.30.1 docker false true} ...
	I0612 15:05:44.929938   13752 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.23.204.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0612 15:05:44.943003   13752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0612 15:05:44.963614   13752 command_runner.go:130] > kubeadm
	I0612 15:05:44.963614   13752 command_runner.go:130] > kubectl
	I0612 15:05:44.963614   13752 command_runner.go:130] > kubelet
	I0612 15:05:44.963805   13752 binaries.go:44] Found k8s binaries, skipping transfer
	I0612 15:05:44.974929   13752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0612 15:05:44.998453   13752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0612 15:05:45.031017   13752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0612 15:05:45.079311   13752 ssh_runner.go:195] Run: grep 172.23.200.184	control-plane.minikube.internal$ /etc/hosts
	I0612 15:05:45.089998   13752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.23.200.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0612 15:05:45.124571   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:05:45.325168   13752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 15:05:45.352400   13752 host.go:66] Checking if "multinode-025000" exists ...
	I0612 15:05:45.353142   13752 start.go:316] joinCluster: &{Name:multinode-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-025000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.23.200.184 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.23.206.72 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 15:05:45.353674   13752 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 15:05:45.353674   13752 host.go:66] Checking if "multinode-025000-m02" exists ...
	I0612 15:05:45.354009   13752 mustload.go:65] Loading cluster: multinode-025000
	I0612 15:05:45.354772   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:05:45.355604   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:05:47.576539   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:47.581332   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:47.581537   13752 host.go:66] Checking if "multinode-025000" exists ...
	I0612 15:05:47.582151   13752 api_server.go:166] Checking apiserver status ...
	I0612 15:05:47.594257   13752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 15:05:47.594257   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:05:49.805232   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:49.805232   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:49.817884   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:05:52.427895   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:05:52.427990   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:52.428183   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:05:52.544068   13752 command_runner.go:130] > 1830
	I0612 15:05:52.544339   13752 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.9500288s)
	I0612 15:05:52.557130   13752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup
	W0612 15:05:52.577706   13752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 15:05:52.590719   13752 ssh_runner.go:195] Run: ls
	I0612 15:05:52.600003   13752 api_server.go:253] Checking apiserver healthz at https://172.23.200.184:8443/healthz ...
	I0612 15:05:52.608430   13752 api_server.go:279] https://172.23.200.184:8443/healthz returned 200:
	ok
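
The healthz probe above is a plain HTTPS GET expecting status 200 with body "ok". A minimal Go sketch of such a probe (hypothetical; it trusts minikube's ca.crt rather than skipping TLS verification, with the URL and path taken from the log):

// Sketch of the GET https://<control-plane>:8443/healthz check.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func apiserverHealthy(url, caFile string) (bool, error) {
	ca, err := os.ReadFile(caFile)
	if err != nil {
		return false, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca) // trust the cluster CA instead of InsecureSkipVerify
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://172.23.200.184:8443/healthz",
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt`)
	fmt.Println(ok, err)
}
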
	I0612 15:05:52.622028   13752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-025000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0612 15:05:52.787849   13752 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-v4cqk, kube-system/kube-proxy-tdcdp
	I0612 15:05:55.828219   13752 command_runner.go:130] > node/multinode-025000-m02 cordoned
	I0612 15:05:55.828300   13752 command_runner.go:130] > pod "busybox-fc5497c4f-9bsls" has DeletionTimestamp older than 1 seconds, skipping
	I0612 15:05:55.828300   13752 command_runner.go:130] > node/multinode-025000-m02 drained
	I0612 15:05:55.828300   13752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl drain multinode-025000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2062611s)
	I0612 15:05:55.828300   13752 node.go:128] successfully drained node "multinode-025000-m02"
	I0612 15:05:55.828508   13752 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0612 15:05:55.828770   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 15:05:58.012780   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:05:58.024308   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:05:58.024468   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 15:06:00.552515   13752 main.go:141] libmachine: [stdout =====>] : 172.23.204.132
	
	I0612 15:06:00.552515   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:00.552515   13752 sshutil.go:53] new ssh client: &{IP:172.23.204.132 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 15:06:01.052634   13752 command_runner.go:130] ! W0612 22:06:01.049967    1543 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0612 15:06:01.574423   13752 command_runner.go:130] ! W0612 22:06:01.569930    1543 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 8dc88eb906f301af25ee91c757ea86831a611d0c2cbd9c6fc85b258149fa4c16: output: E0612 22:06:01.264106    1582 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-9bsls_default\" network: cni config uninitialized" podSandboxID="8dc88eb906f301af25ee91c757ea86831a611d0c2cbd9c6fc85b258149fa4c16"
	I0612 15:06:01.574508   13752 command_runner.go:130] ! time="2024-06-12T22:06:01Z" level=fatal msg="stopping the pod sandbox \"8dc88eb906f301af25ee91c757ea86831a611d0c2cbd9c6fc85b258149fa4c16\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-9bsls_default\" network: cni config uninitialized"
	I0612 15:06:01.574508   13752 command_runner.go:130] ! : exit status 1
	I0612 15:06:01.603350   13752 command_runner.go:130] > [preflight] Running pre-flight checks
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Stopping the kubelet service
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0612 15:06:01.603350   13752 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0612 15:06:01.603350   13752 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0612 15:06:01.603350   13752 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0612 15:06:01.603350   13752 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0612 15:06:01.603350   13752 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0612 15:06:01.603350   13752 command_runner.go:130] > to reset your system's IPVS tables.
	I0612 15:06:01.603350   13752 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0612 15:06:01.603350   13752 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0612 15:06:01.603350   13752 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.7748225s)
	I0612 15:06:01.603350   13752 node.go:155] successfully reset node "multinode-025000-m02"
	I0612 15:06:01.604758   13752 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:06:01.605339   13752 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.200.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 15:06:01.606692   13752 cert_rotation.go:137] Starting client certificate rotation controller
	I0612 15:06:01.606878   13752 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0612 15:06:01.606878   13752 round_trippers.go:463] DELETE https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:01.606878   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:01.606878   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:01.606878   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:01.606878   13752 round_trippers.go:473]     Content-Type: application/json
	I0612 15:06:01.627326   13752 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0612 15:06:01.627326   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:01.627326   13752 round_trippers.go:580]     Content-Length: 171
	I0612 15:06:01.627326   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:01 GMT
	I0612 15:06:01.627326   13752 round_trippers.go:580]     Audit-Id: 01208d5e-ac15-40e6-b821-ffabd585b7a7
	I0612 15:06:01.633211   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:01.633211   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:01.633211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:01.633211   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:01.633211   13752 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-025000-m02","kind":"nodes","uid":"795a4638-bf70-440d-a6a1-2f194ade7384"}}
	I0612 15:06:01.633211   13752 node.go:180] successfully deleted node "multinode-025000-m02"
	I0612 15:06:01.633211   13752 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
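
The traced DELETE above sends a v1 DeleteOptions body to /api/v1/nodes/<name>, with the headers shown. A sketch of building that request in Go (auth and TLS setup omitted; the real flow goes through client-go, as the kapi.go line above shows):

// Sketch of the node-deletion request logged by round_trippers.go.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func deleteNodeRequest(server, node string) (*http.Request, error) {
	body := bytes.NewBufferString(`{"kind":"DeleteOptions","apiVersion":"v1"}`)
	req, err := http.NewRequest(http.MethodDelete, server+"/api/v1/nodes/"+node, body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json, */*")
	return req, nil
}

func main() {
	req, _ := deleteNodeRequest("https://172.23.200.184:8443", "multinode-025000-m02")
	fmt.Println(req.Method, req.URL) // DELETE .../api/v1/nodes/multinode-025000-m02
}
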
	I0612 15:06:01.633211   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0612 15:06:01.633211   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 15:06:03.689503   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:06:03.689503   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:03.700060   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 15:06:06.211691   13752 main.go:141] libmachine: [stdout =====>] : 172.23.200.184
	
	I0612 15:06:06.222483   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:06.222858   13752 sshutil.go:53] new ssh client: &{IP:172.23.200.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 15:06:06.407307   13752 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7r4gb6.bv8tbrmt47yqfsdc --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 
	I0612 15:06:06.407307   13752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7740803s)
	I0612 15:06:06.407307   13752 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 15:06:06.407307   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7r4gb6.bv8tbrmt47yqfsdc --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-025000-m02"
	I0612 15:06:06.620598   13752 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0612 15:06:07.480579   13752 command_runner.go:130] > [preflight] Running pre-flight checks
	I0612 15:06:07.480579   13752 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0612 15:06:07.480579   13752 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 503.119418ms
	I0612 15:06:07.480579   13752 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0612 15:06:07.480579   13752 command_runner.go:130] > This node has joined the cluster:
	I0612 15:06:07.480579   13752 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0612 15:06:07.480579   13752 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0612 15:06:07.480579   13752 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0612 15:06:07.480579   13752 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7r4gb6.bv8tbrmt47yqfsdc --discovery-token-ca-cert-hash sha256:10c04e0412ada9d72a46398cbb6ecb6de5efcad2d747fb615b7e984406c55dc5 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-025000-m02": (1.0732687s)
	I0612 15:06:07.480579   13752 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0612 15:06:07.686699   13752 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0612 15:06:07.886287   13752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-025000-m02 minikube.k8s.io/updated_at=2024_06_12T15_06_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97 minikube.k8s.io/name=multinode-025000 minikube.k8s.io/primary=false
	I0612 15:06:08.003256   13752 command_runner.go:130] > node/multinode-025000-m02 labeled
	I0612 15:06:08.003256   13752 start.go:318] duration metric: took 22.6500388s to joinCluster
	I0612 15:06:08.003256   13752 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.23.204.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0612 15:06:08.008504   13752 out.go:177] * Verifying Kubernetes components...
	I0612 15:06:08.004258   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:06:08.025586   13752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0612 15:06:08.209009   13752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0612 15:06:08.237210   13752 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 15:06:08.238198   13752 kapi.go:59] client config for multinode-025000: &rest.Config{Host:"https://172.23.200.184:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-025000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x288e1e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0612 15:06:08.238963   13752 node_ready.go:35] waiting up to 6m0s for node "multinode-025000-m02" to be "Ready" ...
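
The GET requests that follow poll the node object roughly every 500ms until its Ready condition turns True or the 6m0s budget announced above runs out. A hypothetical Go sketch of that wait loop (types trimmed to what the check needs; a fake fetcher stands in for the API calls):

// Sketch of the node_ready.go-style readiness wait traced below.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the node JSON carries a Ready=True condition.
func isReady(body []byte) bool {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func waitReady(get func() []byte, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if isReady(get()) {
			return true
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms polling cadence in the log
	}
	return false
}

func main() {
	calls := 0
	fake := func() []byte { // stand-in for GET /api/v1/nodes/<name>
		calls++
		if calls < 3 {
			return []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
		}
		return []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	}
	fmt.Println(waitReady(fake, 6*time.Minute)) // true after the third poll
}
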
	I0612 15:06:08.239211   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:08.239211   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:08.239211   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:08.239211   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:08.242936   13752 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0612 15:06:08.242936   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:08.242936   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:08.242936   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:08.243016   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:08.243016   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:08 GMT
	I0612 15:06:08.243016   13752 round_trippers.go:580]     Audit-Id: bbd3425f-3783-42d8-b83f-5a530af99375
	I0612 15:06:08.243016   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:08.243129   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2138","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3789 chars]
	I0612 15:06:08.744007   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:08.744109   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:08.744109   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:08.744109   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:08.747683   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:08.747683   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:08.747763   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:08.747763   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:08 GMT
	I0612 15:06:08.747763   13752 round_trippers.go:580]     Audit-Id: 13655efc-c1c1-4ce3-9eac-036dc7d24263
	I0612 15:06:08.747763   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:08.747763   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:08.747763   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:08.747763   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2138","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3789 chars]
	I0612 15:06:09.243796   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:09.243796   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:09.243796   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:09.243796   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:09.246762   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:09.246762   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:09.246762   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:09 GMT
	I0612 15:06:09.246762   13752 round_trippers.go:580]     Audit-Id: e8569864-a7fd-4431-970b-65ecf62cc822
	I0612 15:06:09.246762   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:09.246762   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:09.246762   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:09.246762   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:09.249446   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2138","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3789 chars]
	I0612 15:06:09.749026   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:09.749120   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:09.749120   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:09.749207   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:09.752573   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:09.752573   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:09.752573   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:09.752573   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:09.752573   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:09 GMT
	I0612 15:06:09.752573   13752 round_trippers.go:580]     Audit-Id: 1e9c4978-e2ec-46e7-92d3-89c5fd10acef
	I0612 15:06:09.752573   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:09.752573   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:09.753557   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:10.253638   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:10.253638   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:10.253638   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:10.253638   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:10.258536   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:10.258536   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:10.258536   13752 round_trippers.go:580]     Audit-Id: 7ea82fb2-f4eb-4425-a221-6c96965459d5
	I0612 15:06:10.258536   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:10.258536   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:10.258536   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:10.258536   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:10.258536   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:10 GMT
	I0612 15:06:10.259070   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:10.259618   13752 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 15:06:10.742905   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:10.743016   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:10.743016   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:10.743081   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:10.751393   13752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 15:06:10.751393   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:10.751393   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:10.751393   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:10.751393   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:10.751393   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:10.751393   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:10 GMT
	I0612 15:06:10.751393   13752 round_trippers.go:580]     Audit-Id: fb12fb92-82c7-4844-9bec-41394cdc0850
	I0612 15:06:10.751393   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:11.250847   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:11.250847   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:11.250847   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:11.250932   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:11.254567   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:11.254567   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:11.254884   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:11 GMT
	I0612 15:06:11.254884   13752 round_trippers.go:580]     Audit-Id: d8b9dfa5-ef19-451c-88a4-eaa087b7c3df
	I0612 15:06:11.254884   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:11.254884   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:11.254884   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:11.254884   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:11.255084   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:11.752245   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:11.752329   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:11.752329   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:11.752329   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:11.756218   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:11.756218   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:11.756218   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:11 GMT
	I0612 15:06:11.756218   13752 round_trippers.go:580]     Audit-Id: a8db07f8-e087-4ee3-8b2e-f35286b68800
	I0612 15:06:11.756218   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:11.756218   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:11.756218   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:11.756218   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:11.756218   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:12.252769   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:12.252769   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:12.252769   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:12.252769   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:12.260792   13752 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0612 15:06:12.261263   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:12.261263   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:12.261263   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:12.261263   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:12 GMT
	I0612 15:06:12.261263   13752 round_trippers.go:580]     Audit-Id: 651b1b22-a56d-4674-a288-4181fe50dfe9
	I0612 15:06:12.261263   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:12.261263   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:12.261263   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:12.261944   13752 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 15:06:12.739972   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:12.740161   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:12.740161   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:12.740161   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:12.743507   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:12.743507   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:12.743507   13752 round_trippers.go:580]     Audit-Id: ddb2b769-7566-4938-a6ba-3292e436dfef
	I0612 15:06:12.744305   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:12.744305   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:12.744305   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:12.744305   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:12.744305   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:12 GMT
	I0612 15:06:12.744735   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:13.241569   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:13.241569   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:13.241569   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:13.241569   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:13.246203   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:13.246203   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:13.246203   13752 round_trippers.go:580]     Audit-Id: 281d352c-d865-4aae-b4f6-0b27e69d52f9
	I0612 15:06:13.246289   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:13.246289   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:13.246289   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:13.246289   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:13.246289   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:13 GMT
	I0612 15:06:13.246549   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:13.740410   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:13.740500   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:13.740500   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:13.740500   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:13.744955   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:13.744991   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:13.744991   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:13.744991   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:13 GMT
	I0612 15:06:13.745096   13752 round_trippers.go:580]     Audit-Id: 17e3d26b-b10e-4205-bbb5-412836aeb7b4
	I0612 15:06:13.745096   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:13.745096   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:13.745096   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:13.745505   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:14.240601   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:14.240682   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:14.240682   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:14.240682   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:14.244546   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:14.244546   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:14.244546   13752 round_trippers.go:580]     Audit-Id: f09abef8-5fd6-4eb5-98d9-7f2d3987642a
	I0612 15:06:14.244546   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:14.244891   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:14.244891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:14.244891   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:14.244891   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:14 GMT
	I0612 15:06:14.245072   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:14.739629   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:14.739751   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:14.739751   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:14.739829   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:14.749271   13752 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 15:06:14.749271   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:14.749271   13752 round_trippers.go:580]     Audit-Id: 8ce7d0db-ea76-4501-a939-8fe4f2a9ae78
	I0612 15:06:14.749271   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:14.749271   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:14.749271   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:14.749271   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:14.749271   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:14 GMT
	I0612 15:06:14.749271   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:14.750327   13752 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 15:06:15.250657   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:15.250721   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:15.250721   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:15.250721   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:15.260447   13752 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0612 15:06:15.260578   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:15.260578   13752 round_trippers.go:580]     Audit-Id: c88c2121-1272-4cb3-acc3-1244083e9b7f
	I0612 15:06:15.260578   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:15.260651   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:15.260651   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:15.260651   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:15.260651   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:15 GMT
	I0612 15:06:15.260817   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:15.752170   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:15.752170   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:15.752170   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:15.752170   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:15.757282   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:06:15.757282   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:15.757282   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:15.757282   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:15.757282   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:15.757282   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:15.757282   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:15 GMT
	I0612 15:06:15.757282   13752 round_trippers.go:580]     Audit-Id: 5b05b8d7-25a2-4d8b-9fae-67f3de76fad9
	I0612 15:06:15.757282   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:16.253366   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:16.253366   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:16.253430   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:16.253430   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:16.257258   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:16.257698   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:16.257698   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:16.257698   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:16.257698   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:16.257698   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:16 GMT
	I0612 15:06:16.257698   13752 round_trippers.go:580]     Audit-Id: d50e984d-adb2-4e9e-a2f4-01492d664abb
	I0612 15:06:16.257698   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:16.259469   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:16.752694   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:16.752694   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:16.752958   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:16.752958   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:16.760419   13752 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0612 15:06:16.760593   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:16.760593   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:16.760593   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:16.760593   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:16.760593   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:16.760593   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:16 GMT
	I0612 15:06:16.760593   13752 round_trippers.go:580]     Audit-Id: 69d01f06-fa40-4561-83bb-edad1ac5973b
	I0612 15:06:16.762380   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2145","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3898 chars]
	I0612 15:06:16.762380   13752 node_ready.go:53] node "multinode-025000-m02" has status "Ready":"False"
	I0612 15:06:17.252887   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:17.252887   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.252887   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.252887   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.256727   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:17.257435   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.257435   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.257435   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.257435   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.257435   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.257435   13752 round_trippers.go:580]     Audit-Id: 63b43051-bfa9-4123-898d-52465adc9144
	I0612 15:06:17.257435   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.257435   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2170","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3933 chars]
	I0612 15:06:17.258204   13752 node_ready.go:49] node "multinode-025000-m02" has status "Ready":"True"
	I0612 15:06:17.258204   13752 node_ready.go:38] duration metric: took 9.0191279s for node "multinode-025000-m02" to be "Ready" ...
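
The node_ready.go entries above are a fixed-interval poll: roughly every 500ms the client issues GET /api/v1/nodes/multinode-025000-m02 and inspects the node's Ready condition, giving up after the stated 6m0s budget. For readers reproducing this outside the test harness, below is a minimal client-go sketch of the same pattern. It is an illustration under assumptions, not minikube's actual node_ready.go: the function and file names are ours, and the kubeconfig location is the client-go default.

// waitnodeready_sketch.go -- illustrative sketch only, not minikube code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the API server until the named node reports the
// NodeReady condition as True, mirroring the GET loop in the log above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "multinode-025000-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

Each has status "Ready":"False" line above is one iteration of such a loop observing a not-yet-True condition; the rising resourceVersion in the response bodies (2138, 2145, 2170) shows the node object being updated between polls.
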
	I0612 15:06:17.258204   13752 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0612 15:06:17.258204   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods
	I0612 15:06:17.258731   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.258731   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.258852   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.263985   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:06:17.263985   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.263985   13752 round_trippers.go:580]     Audit-Id: 6502e7b7-93d0-43dc-bd9c-a7595ad1e5d9
	I0612 15:06:17.263985   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.263985   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.263985   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.263985   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.264385   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.267291   13752 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2173"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86160 chars]
	I0612 15:06:17.270940   13752 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.270940   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vgcxw
	I0612 15:06:17.270940   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.270940   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.270940   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.273803   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.273803   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.273803   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.273803   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.273803   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.273803   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.273803   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.273803   13752 round_trippers.go:580]     Audit-Id: 49dc6815-8a94-46cf-b4c9-0dc14ef5fcf4
	I0612 15:06:17.274879   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-vgcxw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c5bd143a-d39e-46af-9308-0a97bb45729c","resourceVersion":"1975","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"bf85556f-50fc-4d15-8980-72e285cbe89f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf85556f-50fc-4d15-8980-72e285cbe89f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0612 15:06:17.275167   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.275167   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.275167   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.275167   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.277727   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.278356   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.278356   13752 round_trippers.go:580]     Audit-Id: da26319a-0c45-4169-99f2-eb7328d58e3f
	I0612 15:06:17.278356   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.278356   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.278356   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.278356   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.278356   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.278732   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.279146   13752 pod_ready.go:92] pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.279146   13752 pod_ready.go:81] duration metric: took 8.2062ms for pod "coredns-7db6d8ff4d-vgcxw" in "kube-system" namespace to be "Ready" ...
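
Once the node is Ready, pod_ready.go applies the same poll shape per system-critical pod: fetch the pod and inspect its Ready condition. A hedged sketch of that per-pod check follows, reusing the imports from the previous sketch; the helper name is ours, not minikube's.

// isPodReady reports whether a pod's Ready condition is True, which is
// what the pod_ready.go lines above log as has status "Ready":"True".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// The PodList request above fetches every kube-system pod and filters
// client-side; an equivalent narrowing with a label selector would be:
//   pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
//       metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})

The control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) are checked the same way in the entries that follow.
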
	I0612 15:06:17.279233   13752 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.279302   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-025000
	I0612 15:06:17.279302   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.279338   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.279338   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.282208   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.282208   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.282208   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.282208   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.282208   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.282208   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.282208   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.282208   13752 round_trippers.go:580]     Audit-Id: 9543d149-af3f-4949-8b18-05de62295166
	I0612 15:06:17.282613   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-025000","namespace":"kube-system","uid":"be41c4a6-88ce-4e08-9b7c-16c0b4f3eec2","resourceVersion":"1875","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.23.200.184:2379","kubernetes.io/config.hash":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.mirror":"7b6b5637642f3d915c0db1461c7074e6","kubernetes.io/config.seen":"2024-06-12T22:02:25.563300686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0612 15:06:17.283360   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.283360   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.283360   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.283511   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.285759   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.285980   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.285980   13752 round_trippers.go:580]     Audit-Id: 9ec89d8c-474d-4de4-8eb8-91ef510d22cc
	I0612 15:06:17.286039   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.286039   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.286039   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.286039   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.286039   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.286424   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.286519   13752 pod_ready.go:92] pod "etcd-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.286519   13752 pod_ready.go:81] duration metric: took 7.2857ms for pod "etcd-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.286519   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.286519   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-025000
	I0612 15:06:17.286519   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.286519   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.286519   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.289721   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.289824   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.289824   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.289907   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.289963   13752 round_trippers.go:580]     Audit-Id: d2ea6228-ca21-4202-8144-e0b618f9b6c5
	I0612 15:06:17.289963   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.289963   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.289963   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.290031   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-025000","namespace":"kube-system","uid":"63e55411-d432-4e5a-becc-fae0887fecae","resourceVersion":"1897","creationTimestamp":"2024-06-12T22:02:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.23.200.184:8443","kubernetes.io/config.hash":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.mirror":"d6071cd4356268889f798790dc93ce06","kubernetes.io/config.seen":"2024-06-12T22:02:25.478872091Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:02:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0612 15:06:17.290984   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.290984   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.290984   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.290984   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.293395   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.293395   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.293774   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.293774   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.293774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.293774   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.293868   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.294008   13752 round_trippers.go:580]     Audit-Id: a94194a8-2c21-4b96-bb21-b96fe8d08ee1
	I0612 15:06:17.294323   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.294909   13752 pod_ready.go:92] pod "kube-apiserver-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.294996   13752 pod_ready.go:81] duration metric: took 8.4766ms for pod "kube-apiserver-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.294996   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.295194   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-025000
	I0612 15:06:17.295194   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.295194   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.295194   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.300492   13752 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0612 15:06:17.300492   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.300492   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.300492   13752 round_trippers.go:580]     Audit-Id: a16f2ac3-462d-4032-b8f8-a3b0abf05ad5
	I0612 15:06:17.300492   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.300492   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.300590   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.300590   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.300889   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-025000","namespace":"kube-system","uid":"68c9aa4f-49ee-439c-ad51-7943e65c0085","resourceVersion":"1895","creationTimestamp":"2024-06-12T21:39:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.mirror":"88de11d8b1aaec126153d44e87c4b5dd","kubernetes.io/config.seen":"2024-06-12T21:39:23.999674614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0612 15:06:17.301558   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.301656   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.301656   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.301656   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.304419   13752 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0612 15:06:17.304419   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.304419   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.304419   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.304419   13752 round_trippers.go:580]     Audit-Id: d6ee5714-1c76-43bf-b1b2-3888afac52de
	I0612 15:06:17.304419   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.304419   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.304419   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.304419   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.304419   13752 pod_ready.go:92] pod "kube-controller-manager-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.304419   13752 pod_ready.go:81] duration metric: took 9.4239ms for pod "kube-controller-manager-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.304419   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.468181   13752 request.go:629] Waited for 163.5049ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:06:17.468280   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47lr8
	I0612 15:06:17.468349   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.468349   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.468349   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.472267   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:17.472267   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.472267   13752 round_trippers.go:580]     Audit-Id: 4099e2c6-fa22-4d62-a01a-fda57fcbd95e
	I0612 15:06:17.472267   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.472267   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.472267   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.472267   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.472267   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.472453   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47lr8","generateName":"kube-proxy-","namespace":"kube-system","uid":"10b24fa7-8eea-4fbb-ab18-404e853aa7ab","resourceVersion":"1793","creationTimestamp":"2024-06-12T21:39:45Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0612 15:06:17.655141   13752 request.go:629] Waited for 181.884ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.655401   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:17.655520   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.655520   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.655566   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.659977   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:17.659977   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.659977   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.659977   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.659977   13752 round_trippers.go:580]     Audit-Id: f186c650-bfd0-4cee-98c1-ef76bc6d3c38
	I0612 15:06:17.660316   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.660316   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.660316   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.660434   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:17.661352   13752 pod_ready.go:92] pod "kube-proxy-47lr8" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:17.661352   13752 pod_ready.go:81] duration metric: took 356.9312ms for pod "kube-proxy-47lr8" in "kube-system" namespace to be "Ready" ...
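The repeated `Waited for ... due to client-side throttling, not priority and fairness` lines are emitted by client-go's local rate limiter, not by the API server's Priority and Fairness machinery. A minimal sketch of where those limits are configured, assuming a hypothetical kubeconfig path (this is not minikube's code):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with explicit client-side rate limits.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// client-go defaults to QPS=5 with Burst=10; once the burst bucket
	// drains, each request blocks and request.go logs the waits above.
	cfg.QPS = 5
	cfg.Burst = 10
	return kubernetes.NewForConfig(cfg)
}

Raising QPS/Burst would shrink the ~150-200ms waits seen here, at the cost of more load on the apiserver.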
	I0612 15:06:17.661488   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:17.856501   13752 request.go:629] Waited for 194.7294ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:06:17.856501   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jwdg
	I0612 15:06:17.856501   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:17.856501   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:17.856501   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:17.860079   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:17.860079   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:17.860079   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:17.860079   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:17.860079   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:17 GMT
	I0612 15:06:17.860079   13752 round_trippers.go:580]     Audit-Id: 5ca5cb14-3561-4a97-9b4e-df25600c7d70
	I0612 15:06:17.860079   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:17.860079   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:17.861101   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7jwdg","generateName":"kube-proxy-","namespace":"kube-system","uid":"643030f7-b876-4243-bacc-04205e88cc9e","resourceVersion":"1748","creationTimestamp":"2024-06-12T21:47:16Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:47:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0612 15:06:18.060098   13752 request.go:629] Waited for 197.7279ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:06:18.060462   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m03
	I0612 15:06:18.060462   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.060508   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.060508   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.064915   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:18.065540   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.065540   13752 round_trippers.go:580]     Audit-Id: db5203be-bbdc-4c51-ad76-1a303bb0065d
	I0612 15:06:18.065540   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.065540   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.065627   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.065627   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.065627   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.065897   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m03","uid":"9d457bc2-c46f-4b5d-8023-5c06ef6198c7","resourceVersion":"1913","creationTimestamp":"2024-06-12T21:57:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T14_57_59_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:57:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0612 15:06:18.066510   13752 pod_ready.go:97] node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
	I0612 15:06:18.066536   13752 pod_ready.go:81] duration metric: took 405.0463ms for pod "kube-proxy-7jwdg" in "kube-system" namespace to be "Ready" ...
	E0612 15:06:18.066536   13752 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-025000-m03" hosting pod "kube-proxy-7jwdg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-025000-m03" has status "Ready":"Unknown"
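The skip above is driven by the host node's Ready condition: multinode-025000-m03 reports Ready "Unknown" because its VM is still stopped at this point in the test. A hedged sketch of that check (nodeIsReady is an illustrative name, not minikube's):

package main

import corev1 "k8s.io/api/core/v1"

// nodeIsReady reports whether the node's Ready condition is "True";
// "Unknown", as for multinode-025000-m03 above, counts as not ready.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}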
	I0612 15:06:18.066536   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:18.264859   13752 request.go:629] Waited for 198.1942ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:06:18.264859   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdcdp
	I0612 15:06:18.264859   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.265014   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.265014   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.269250   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:18.269250   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.269250   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.269250   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.269250   13752 round_trippers.go:580]     Audit-Id: 8e4f04e7-e18f-432b-9541-ba06e0420547
	I0612 15:06:18.269250   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.269250   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.269250   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.269626   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdcdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623833c-ce55-46b1-a840-99b3143adac1","resourceVersion":"2151","creationTimestamp":"2024-06-12T21:42:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b44c21bc-e2cc-415b-bc2f-616adabe0681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:42:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b44c21bc-e2cc-415b-bc2f-616adabe0681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5842 chars]
	I0612 15:06:18.467478   13752 request.go:629] Waited for 196.8774ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:18.467605   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000-m02
	I0612 15:06:18.467605   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.467605   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.467605   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.471330   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:18.472181   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.472181   13752 round_trippers.go:580]     Audit-Id: 3d59ecb5-0fbb-4bd4-8a7f-5d0b4e7f4ae0
	I0612 15:06:18.472181   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.472181   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.472181   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.472181   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.472181   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.472638   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000-m02","uid":"50e380ff-ec75-414e-b6bd-965943b855b7","resourceVersion":"2170","creationTimestamp":"2024-06-12T22:06:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_12T15_06_07_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-12T22:06:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3933 chars]
	I0612 15:06:18.473123   13752 pod_ready.go:92] pod "kube-proxy-tdcdp" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:18.473123   13752 pod_ready.go:81] duration metric: took 406.5859ms for pod "kube-proxy-tdcdp" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:18.473206   13752 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:18.653827   13752 request.go:629] Waited for 180.3453ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:06:18.653933   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-025000
	I0612 15:06:18.653933   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.653933   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.654069   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.657416   13752 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0612 15:06:18.657416   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.657416   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.657416   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.657948   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.657948   13752 round_trippers.go:580]     Audit-Id: f3e17f0e-0fca-4846-a5cc-916171b94ef8
	I0612 15:06:18.657948   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.657948   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.658276   13752 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-025000","namespace":"kube-system","uid":"83b272cb-1286-47d8-bcb1-a66056dff2a5","resourceVersion":"1865","creationTimestamp":"2024-06-12T21:39:31Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.mirror":"de62e7fd7d0feea82620e745032c1a67","kubernetes.io/config.seen":"2024-06-12T21:39:31.214466565Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-12T21:39:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0612 15:06:18.855871   13752 request.go:629] Waited for 196.6766ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:18.855871   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes/multinode-025000
	I0612 15:06:18.855871   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:18.855871   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:18.855871   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:18.859996   13752 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0612 15:06:18.859996   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:18.859996   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:18 GMT
	I0612 15:06:18.859996   13752 round_trippers.go:580]     Audit-Id: b5738e96-a4aa-4c60-a6b8-ab5f4a242595
	I0612 15:06:18.859996   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:18.860463   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:18.860463   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:18.860463   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:18.861286   13752 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-06-12T21:39:28Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0612 15:06:18.861795   13752 pod_ready.go:92] pod "kube-scheduler-multinode-025000" in "kube-system" namespace has status "Ready":"True"
	I0612 15:06:18.861878   13752 pod_ready.go:81] duration metric: took 388.6702ms for pod "kube-scheduler-multinode-025000" in "kube-system" namespace to be "Ready" ...
	I0612 15:06:18.861878   13752 pod_ready.go:38] duration metric: took 1.6036681s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
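Each of the waits summarized above follows the same poll-until-Ready shape. A sketch under stated assumptions (illustrative only; minikube's pod_ready.go differs in detail):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's PodReady condition is "True" or the
// timeout expires, mirroring the "waiting up to 6m0s" lines above.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}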
	I0612 15:06:18.861966   13752 system_svc.go:44] waiting for kubelet service to be running ....
	I0612 15:06:18.873743   13752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 15:06:18.905776   13752 system_svc.go:56] duration metric: took 43.8101ms WaitForService to wait for kubelet
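The kubelet check is a single SSH command whose exit status is the entire signal: `systemctl is-active --quiet service kubelet` prints nothing (because of --quiet) and exits 0 only while the unit is active. A sketch assuming a generic command runner (minikube's ssh_runner API is richer):

// kubeletRunning treats any command error as "not active".
func kubeletRunning(run func(cmd string) error) bool {
	return run("sudo systemctl is-active --quiet service kubelet") == nil
}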
	I0612 15:06:18.905776   13752 kubeadm.go:576] duration metric: took 10.9014821s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0612 15:06:18.905776   13752 node_conditions.go:102] verifying NodePressure condition ...
	I0612 15:06:19.059262   13752 request.go:629] Waited for 153.4854ms due to client-side throttling, not priority and fairness, request: GET:https://172.23.200.184:8443/api/v1/nodes
	I0612 15:06:19.059413   13752 round_trippers.go:463] GET https://172.23.200.184:8443/api/v1/nodes
	I0612 15:06:19.059413   13752 round_trippers.go:469] Request Headers:
	I0612 15:06:19.059413   13752 round_trippers.go:473]     Accept: application/json, */*
	I0612 15:06:19.059413   13752 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0612 15:06:19.065979   13752 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0612 15:06:19.065979   13752 round_trippers.go:577] Response Headers:
	I0612 15:06:19.065979   13752 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 168b388e-3b93-49fa-b67e-6fae0b04eaf7
	I0612 15:06:19.065979   13752 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f97048f5-40ef-4a1d-ab9c-5ead7f903da8
	I0612 15:06:19.065979   13752 round_trippers.go:580]     Date: Wed, 12 Jun 2024 22:06:19 GMT
	I0612 15:06:19.065979   13752 round_trippers.go:580]     Audit-Id: 0179ee51-13d6-4e75-97a4-fd0d5877edfc
	I0612 15:06:19.065979   13752 round_trippers.go:580]     Cache-Control: no-cache, private
	I0612 15:06:19.065979   13752 round_trippers.go:580]     Content-Type: application/json
	I0612 15:06:19.067024   13752 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2175"},"items":[{"metadata":{"name":"multinode-025000","uid":"2c803ed5-2a52-4fda-a802-60fdcedf1771","resourceVersion":"1942","creationTimestamp":"2024-06-12T21:39:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-025000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cb6dc02966a45c042db8db0cb4c78714624c0e97","minikube.k8s.io/name":"multinode-025000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_12T14_39_32_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15609 chars]
	I0612 15:06:19.068103   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:06:19.068165   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:06:19.068165   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:06:19.068165   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:06:19.068165   13752 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0612 15:06:19.068271   13752 node_conditions.go:123] node cpu capacity is 2
	I0612 15:06:19.068271   13752 node_conditions.go:105] duration metric: took 162.4941ms to run NodePressure ...
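Three nodes are listed, so the capacity pair repeats three times above. A hedged sketch of reading those figures from an already-fetched NodeList:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// logCapacity prints the same two figures per node as node_conditions.go:
// ephemeral-storage and CPU capacity from the node's ResourceList.
func logCapacity(nodes []corev1.Node) {
	for _, n := range nodes {
		fmt.Printf("node storage ephemeral capacity is %s\n",
			n.Status.Capacity.StorageEphemeral().String())
		fmt.Printf("node cpu capacity is %d\n",
			n.Status.Capacity.Cpu().Value())
	}
}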
	I0612 15:06:19.068271   13752 start.go:240] waiting for startup goroutines ...
	I0612 15:06:19.068335   13752 start.go:254] writing updated cluster config ...
	I0612 15:06:19.073446   13752 out.go:177] 
	I0612 15:06:19.076540   13752 config.go:182] Loaded profile config "ha-957600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:06:19.084693   13752 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 15:06:19.085688   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:06:19.091739   13752 out.go:177] * Starting "multinode-025000-m03" worker node in "multinode-025000" cluster
	I0612 15:06:19.095317   13752 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 15:06:19.095317   13752 cache.go:56] Caching tarball of preloaded images
	I0612 15:06:19.095570   13752 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0612 15:06:19.095570   13752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0612 15:06:19.095570   13752 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-025000\config.json ...
	I0612 15:06:19.099810   13752 start.go:360] acquireMachinesLock for multinode-025000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0612 15:06:19.099810   13752 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-025000-m03"
	I0612 15:06:19.100656   13752 start.go:96] Skipping create...Using existing machine configuration
	I0612 15:06:19.100656   13752 fix.go:54] fixHost starting: m03
	I0612 15:06:19.100872   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m03 ).state
	I0612 15:06:21.260404   13752 main.go:141] libmachine: [stdout =====>] : Off
	
	I0612 15:06:21.260557   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:21.260616   13752 fix.go:112] recreateIfNeeded on multinode-025000-m03: state=Stopped err=<nil>
	W0612 15:06:21.260616   13752 fix.go:138] unexpected machine state, will restart: <nil>
	I0612 15:06:21.265241   13752 out.go:177] * Restarting existing hyperv VM for "multinode-025000-m03" ...
	I0612 15:06:21.267720   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-025000-m03
	I0612 15:06:24.349628   13752 main.go:141] libmachine: [stdout =====>] : 
	I0612 15:06:24.349628   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:24.349628   13752 main.go:141] libmachine: Waiting for host to start...
	I0612 15:06:24.349628   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m03 ).state
	I0612 15:06:26.672510   13752 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 15:06:26.672510   13752 main.go:141] libmachine: [stderr =====>] : 
	I0612 15:06:26.672510   13752 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m03 ).networkadapters[0]).ipaddresses[0]
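The hyperv driver performs every VM operation by spawning PowerShell and echoing the child's stdout/stderr into the log (the `[stdout =====>]` / `[stderr =====>]` pairs above). A minimal sketch of that pattern (hypervState is an illustrative name, not the driver's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hypervState queries a VM's state via PowerShell, returning trimmed
// stdout such as "Off" or "Running", as seen in the log above.
func hypervState(vm string) (string, error) {
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	out, err := exec.Command(ps, "-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm)).Output()
	return strings.TrimSpace(string(out)), err
}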
	
	
	==> Docker <==
	Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.122616734Z" level=info msg="shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123474651Z" level=warning msg="cleaning up after shim disconnected" id=3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a namespace=moby
	Jun 12 22:03:03 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:03.123682355Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819634342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819751243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.819788644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:13 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:13.820654753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004015440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004176540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.004193540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.005298945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006561551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006633551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006681251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.006796752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/986567ef57643aec05ae5353795c364b380cb0f13c2ba98b1c4e04897e7b2e46/resolv.conf as [nameserver 172.23.192.1]"
	Jun 12 22:03:36 multinode-025000 cri-dockerd[1271]: time="2024-06-12T22:03:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2434f89aefe0079002e81e136580c67ef1dca28bfa3b4c1e950241aea9663d4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542434894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542705495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.542742195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.543238997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606926167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.606994167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607017268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 12 22:03:36 multinode-025000 dockerd[1050]: time="2024-06-12T22:03:36.607410069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f2a949d407287       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   2434f89aefe00       busybox-fc5497c4f-45qqd
	26e5daf354e36       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   986567ef57643       coredns-7db6d8ff4d-vgcxw
	448e057077ddc       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   5287b61207e62       storage-provisioner
	cccfd1e9fef5e       ac1c61439df46                                                                                         4 minutes ago       Running             kindnet-cni               1                   a20975d81b350       kindnet-bqlg8
	3546a5c003210       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   5287b61207e62       storage-provisioner
	227a905829b07       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                1                   435c56b0fbbbb       kube-proxy-47lr8
	6b61f5f6483d5       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   76517193a960a       etcd-multinode-025000
	bbe2d2e51b5f3       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            0                   20cbfb3fb8531       kube-apiserver-multinode-025000
	7acc8ff0a9317       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   1                   a228f6c30fdf4       kube-controller-manager-multinode-025000
	755750ecd1e39       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            1                   da184577f0371       kube-scheduler-multinode-025000
	bfc0382d49a48       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   84a9b747663ca       busybox-fc5497c4f-45qqd
	e83cf4eef49e4       cbb01a7bd410d                                                                                         27 minutes ago      Exited              coredns                   0                   894c58e9fe752       coredns-7db6d8ff4d-vgcxw
	4d60d82f6bc5d       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              27 minutes ago      Exited              kindnet-cni               0                   92f2d5f19e95e       kindnet-bqlg8
	c4842faba751e       747097150317f                                                                                         27 minutes ago      Exited              kube-proxy                0                   fad98f611536b       kube-proxy-47lr8
	6b021c195669e       a52dc94f0a912                                                                                         27 minutes ago      Exited              kube-scheduler            0                   d9933fdc9ca72       kube-scheduler-multinode-025000
	685d167da53c9       25a1387cdab82                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   bb4351fab502e       kube-controller-manager-multinode-025000
	
	
	==> coredns [26e5daf354e3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9f7dc1bade6b5769fb289c890c4bc60268e74645c2ad6eb7d326d3f775fd92cb51f1ac39274894772e6760c31275de0003978af82f0f289ef8d45827e8140e48
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54952 - 9035 "HINFO IN 225709527310201015.7757756956422223857. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039110892s
	
	
	==> coredns [e83cf4eef49e] <==
	[INFO] 10.244.1.2:54995 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000106501s
	[INFO] 10.244.1.2:49201 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077401s
	[INFO] 10.244.1.2:60577 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077201s
	[INFO] 10.244.1.2:36057 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107301s
	[INFO] 10.244.1.2:43898 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064s
	[INFO] 10.244.1.2:49177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091201s
	[INFO] 10.244.1.2:45207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000584s
	[INFO] 10.244.0.3:36676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151001s
	[INFO] 10.244.0.3:60305 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305802s
	[INFO] 10.244.0.3:37468 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000209201s
	[INFO] 10.244.0.3:34743 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125201s
	[INFO] 10.244.1.2:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000240801s
	[INFO] 10.244.1.2:42306 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000309601s
	[INFO] 10.244.1.2:36509 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152901s
	[INFO] 10.244.1.2:55614 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000545s
	[INFO] 10.244.0.3:39195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130301s
	[INFO] 10.244.0.3:34618 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272902s
	[INFO] 10.244.0.3:44444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177201s
	[INFO] 10.244.0.3:35691 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001307s
	[INFO] 10.244.1.2:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110501s
	[INFO] 10.244.1.2:41925 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207401s
	[INFO] 10.244.1.2:44306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000736s
	[INFO] 10.244.1.2:46158 - 5 "PTR IN 1.192.23.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000547s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-025000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=multinode-025000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_12T14_39_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:39:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 22:06:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 21:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 22:03:11 +0000   Wed, 12 Jun 2024 22:03:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.200.184
	  Hostname:    multinode-025000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e65e28dfa5bf4f27a0123e4ae1007793
	  System UUID:                3e5a42d3-ea80-0c4d-ad18-4b76e4f3e22f
	  Boot ID:                    0efecf43-b070-4a8f-b542-4d1fd07306ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-45qqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-vgcxw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-025000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m30s
	  kube-system                 kindnet-bqlg8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-025000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-multinode-025000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-47lr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-025000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	  Normal  NodeReady                27m                    kubelet          Node multinode-025000 status is now: NodeReady
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node multinode-025000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m36s)  kubelet          Node multinode-025000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m17s                  node-controller  Node multinode-025000 event: Registered Node multinode-025000 in Controller
	
	
	Name:               multinode-025000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=multinode-025000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T15_06_07_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 22:06:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 22:06:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 12 Jun 2024 22:06:16 +0000   Wed, 12 Jun 2024 22:06:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 12 Jun 2024 22:06:16 +0000   Wed, 12 Jun 2024 22:06:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 12 Jun 2024 22:06:16 +0000   Wed, 12 Jun 2024 22:06:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 12 Jun 2024 22:06:16 +0000   Wed, 12 Jun 2024 22:06:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.23.204.132
	  Hostname:    multinode-025000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 cef56d52639d4c60b1b194d7c7de6534
	  System UUID:                3b021c48-8479-f34c-83c2-77b944a77c5e
	  Boot ID:                    bbbc4555-a7e1-445c-bdcb-1d615d654b08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-95fcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kindnet-v4cqk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-tdcdp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                24m                kubelet          Node multinode-025000-m02 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x2 over 54s)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x2 over 54s)  kubelet          Node multinode-025000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x2 over 54s)  kubelet          Node multinode-025000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           52s                node-controller  Node multinode-025000-m02 event: Registered Node multinode-025000-m02 in Controller
	  Normal  NodeReady                45s                kubelet          Node multinode-025000-m02 status is now: NodeReady
	
	
	Name:               multinode-025000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cb6dc02966a45c042db8db0cb4c78714624c0e97
	                    minikube.k8s.io/name=multinode-025000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_12T14_57_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 12 Jun 2024 21:57:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 12 Jun 2024 21:59:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 12 Jun 2024 21:58:06 +0000   Wed, 12 Jun 2024 21:59:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.23.206.72
	  Hostname:    multinode-025000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b62d5e6740dc42d880d6595ac7dd57ae
	  System UUID:                31a13a9b-b7c6-6643-8352-fb322079216a
	  Boot ID:                    a21b9eff-2471-4589-9e35-5845aae64358
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8252q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-7jwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 19m                  kube-proxy       
	  Normal  Starting                 9m                   kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)    kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)    kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)    kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                  kubelet          Node multinode-025000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  9m3s (x2 over 9m3s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m3s (x2 over 9m3s)  kubelet          Node multinode-025000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m3s (x2 over 9m3s)  kubelet          Node multinode-025000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m2s                 node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	  Normal  NodeReady                8m55s                kubelet          Node multinode-025000-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m16s                node-controller  Node multinode-025000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           4m17s                node-controller  Node multinode-025000-m03 event: Registered Node multinode-025000-m03 in Controller
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.508165] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.342262] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.269809] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.259362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun12 22:01] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.155290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[Jun12 22:02] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	[  +0.095843] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.507476] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +0.171390] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[  +0.210222] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	[  +2.904531] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[  +0.189304] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.162041] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.261611] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.815328] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.096217] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.646175] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	[  +1.441935] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.624550] kauditd_printk_skb: 20 callbacks suppressed
	[  +3.644538] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	[  +8.250122] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [6b61f5f6483d] <==
	{"level":"info","ts":"2024-06-12T22:02:27.759011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T22:02:27.759115Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-12T22:02:27.759495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 switched to configuration voters=(13348376537775904388)"}
	{"level":"info","ts":"2024-06-12T22:02:27.759589Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","added-peer-id":"b93ef5bd064a9684","added-peer-peer-urls":["https://172.23.198.154:2380"]}
	{"level":"info","ts":"2024-06-12T22:02:27.760197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a7fa2563dcb4b7b8","local-member-id":"b93ef5bd064a9684","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T22:02:27.761198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-12T22:02:27.764395Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-12T22:02:27.765492Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b93ef5bd064a9684","initial-advertise-peer-urls":["https://172.23.200.184:2380"],"listen-peer-urls":["https://172.23.200.184:2380"],"advertise-client-urls":["https://172.23.200.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.23.200.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-12T22:02:27.766195Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-12T22:02:27.766744Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.23.200.184:2380"}
	{"level":"info","ts":"2024-06-12T22:02:27.767384Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.23.200.184:2380"}
	{"level":"info","ts":"2024-06-12T22:02:29.503194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-12T22:02:29.50332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-12T22:02:29.503351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgPreVoteResp from b93ef5bd064a9684 at term 2"}
	{"level":"info","ts":"2024-06-12T22:02:29.503368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became candidate at term 3"}
	{"level":"info","ts":"2024-06-12T22:02:29.503424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 received MsgVoteResp from b93ef5bd064a9684 at term 3"}
	{"level":"info","ts":"2024-06-12T22:02:29.503456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b93ef5bd064a9684 became leader at term 3"}
	{"level":"info","ts":"2024-06-12T22:02:29.503481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b93ef5bd064a9684 elected leader b93ef5bd064a9684 at term 3"}
	{"level":"info","ts":"2024-06-12T22:02:29.511068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T22:02:29.511381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-12T22:02:29.511069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b93ef5bd064a9684","local-member-attributes":"{Name:multinode-025000 ClientURLs:[https://172.23.200.184:2379]}","request-path":"/0/members/b93ef5bd064a9684/attributes","cluster-id":"a7fa2563dcb4b7b8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-12T22:02:29.512996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-12T22:02:29.513243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-12T22:02:29.514729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-12T22:02:29.515422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.23.200.184:2379"}
	
	
	==> kernel <==
	 22:07:01 up 6 min,  0 users,  load average: 0.35, 0.22, 0.11
	Linux multinode-025000 5.10.207 #1 SMP Tue Jun 11 00:16:05 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4d60d82f6bc5] <==
	I0612 21:59:14.718263       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 21:59:24.724311       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:59:24.724441       1 main.go:227] handling current node
	I0612 21:59:24.724456       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:59:24.724464       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:59:24.724785       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 21:59:24.724853       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 21:59:34.737266       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:59:34.737410       1 main.go:227] handling current node
	I0612 21:59:34.737425       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:59:34.737432       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:59:34.738157       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 21:59:34.738269       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 21:59:44.746123       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:59:44.746292       1 main.go:227] handling current node
	I0612 21:59:44.746313       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:59:44.746332       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:59:44.746856       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 21:59:44.746925       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 21:59:54.752611       1 main.go:223] Handling node with IPs: map[172.23.198.154:{}]
	I0612 21:59:54.752658       1 main.go:227] handling current node
	I0612 21:59:54.752671       1 main.go:223] Handling node with IPs: map[172.23.196.105:{}]
	I0612 21:59:54.752678       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 21:59:54.753183       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 21:59:54.753277       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [cccfd1e9fef5] <==
	I0612 22:06:14.262494       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 22:06:24.273990       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 22:06:24.274034       1 main.go:227] handling current node
	I0612 22:06:24.274047       1 main.go:223] Handling node with IPs: map[172.23.204.132:{}]
	I0612 22:06:24.274053       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 22:06:24.274339       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 22:06:24.274389       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 22:06:34.281543       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 22:06:34.281658       1 main.go:227] handling current node
	I0612 22:06:34.281674       1 main.go:223] Handling node with IPs: map[172.23.204.132:{}]
	I0612 22:06:34.281682       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 22:06:34.282005       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 22:06:34.282211       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 22:06:44.297036       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 22:06:44.297153       1 main.go:227] handling current node
	I0612 22:06:44.297183       1 main.go:223] Handling node with IPs: map[172.23.204.132:{}]
	I0612 22:06:44.297191       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 22:06:44.297388       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 22:06:44.297399       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	I0612 22:06:54.311192       1 main.go:223] Handling node with IPs: map[172.23.200.184:{}]
	I0612 22:06:54.311238       1 main.go:227] handling current node
	I0612 22:06:54.311250       1 main.go:223] Handling node with IPs: map[172.23.204.132:{}]
	I0612 22:06:54.311257       1 main.go:250] Node multinode-025000-m02 has CIDR [10.244.1.0/24] 
	I0612 22:06:54.311381       1 main.go:223] Handling node with IPs: map[172.23.206.72:{}]
	I0612 22:06:54.311491       1 main.go:250] Node multinode-025000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [bbe2d2e51b5f] <==
	I0612 22:02:31.009966       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0612 22:02:31.010019       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0612 22:02:31.010029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0612 22:02:31.010400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0612 22:02:31.011993       1 shared_informer.go:320] Caches are synced for configmaps
	I0612 22:02:31.012756       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0612 22:02:31.017182       1 aggregator.go:165] initial CRD sync complete...
	I0612 22:02:31.017223       1 autoregister_controller.go:141] Starting autoregister controller
	I0612 22:02:31.017231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0612 22:02:31.017238       1 cache.go:39] Caches are synced for autoregister controller
	I0612 22:02:31.018109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0612 22:02:31.018524       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0612 22:02:31.019519       1 policy_source.go:224] refreshing policies
	I0612 22:02:31.020420       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0612 22:02:31.091331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0612 22:02:31.909532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0612 22:02:32.355789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.198.154 172.23.200.184]
	I0612 22:02:32.358485       1 controller.go:615] quota admission added evaluator for: endpoints
	I0612 22:02:32.377254       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0612 22:02:33.727670       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0612 22:02:34.008881       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0612 22:02:34.034607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0612 22:02:34.157870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0612 22:02:34.176471       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0612 22:02:52.350035       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.23.200.184]
	
	
	==> kube-controller-manager [685d167da53c] <==
	I0612 21:39:59.529553       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0612 21:42:39.169243       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 21:42:39.188142       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 21:42:39.563565       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m02"
	I0612 21:42:58.063730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 21:43:24.138579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.052538ms"
	I0612 21:43:24.156190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.434267ms"
	I0612 21:43:24.156677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.099µs"
	I0612 21:43:24.183391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.299µs"
	I0612 21:43:26.908415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.051448ms"
	I0612 21:43:26.908853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34µs"
	I0612 21:43:27.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.474956ms"
	I0612 21:43:27.304566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.488944ms"
	I0612 21:47:16.485552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 21:47:16.486568       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 21:47:16.503987       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.2.0/24"]
	I0612 21:47:19.629018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025000-m03"
	I0612 21:47:35.032365       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 21:55:19.767980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 21:57:52.374240       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 21:57:58.774442       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m03\" does not exist"
	I0612 21:57:58.774588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 21:57:58.809041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m03" podCIDRs=["10.244.3.0/24"]
	I0612 21:58:06.126407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 21:59:45.222238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	
	
	==> kube-controller-manager [7acc8ff0a931] <==
	I0612 22:03:11.878868       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 22:03:24.254264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.921834ms"
	I0612 22:03:24.256639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.601µs"
	I0612 22:03:37.832133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.001µs"
	I0612 22:03:37.905221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.518825ms"
	I0612 22:03:37.905853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.201µs"
	I0612 22:03:37.917312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.821108ms"
	I0612 22:03:37.917472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.3µs"
	I0612 22:05:52.854604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.965832ms"
	I0612 22:05:52.854969       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="255.501µs"
	I0612 22:05:52.880764       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.655896ms"
	I0612 22:05:52.881336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.4µs"
	E0612 22:06:04.128230       1 gc_controller.go:153] "Failed to get node" err="node \"multinode-025000-m02\" not found" logger="pod-garbage-collector-controller" node="multinode-025000-m02"
	I0612 22:06:07.354274       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025000-m02\" does not exist"
	I0612 22:06:07.369520       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025000-m02" podCIDRs=["10.244.1.0/24"]
	I0612 22:06:09.256311       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76µs"
	I0612 22:06:16.818870       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025000-m02"
	I0612 22:06:16.873685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.1µs"
	I0612 22:06:24.324559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.6µs"
	I0612 22:06:24.340990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.201µs"
	I0612 22:06:24.375183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.1µs"
	I0612 22:06:24.525105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.1µs"
	I0612 22:06:24.527863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.5µs"
	I0612 22:06:25.569202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.705443ms"
	I0612 22:06:25.570715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.5µs"
	
	
	==> kube-proxy [227a905829b0] <==
	I0612 22:02:33.538961       1 server_linux.go:69] "Using iptables proxy"
	I0612 22:02:33.585761       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.200.184"]
	I0612 22:02:33.754056       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 22:02:33.754118       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 22:02:33.754141       1 server_linux.go:165] "Using iptables Proxier"
	I0612 22:02:33.765449       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 22:02:33.766192       1 server.go:872] "Version info" version="v1.30.1"
	I0612 22:02:33.766246       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 22:02:33.769980       1 config.go:192] "Starting service config controller"
	I0612 22:02:33.770461       1 config.go:101] "Starting endpoint slice config controller"
	I0612 22:02:33.770493       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 22:02:33.770630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 22:02:33.773852       1 config.go:319] "Starting node config controller"
	I0612 22:02:33.773944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 22:02:33.870743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 22:02:33.870698       1 shared_informer.go:320] Caches are synced for service config
	I0612 22:02:33.882534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c4842faba751] <==
	I0612 21:39:47.407607       1 server_linux.go:69] "Using iptables proxy"
	I0612 21:39:47.423801       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.23.198.154"]
	I0612 21:39:47.480061       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0612 21:39:47.480182       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0612 21:39:47.480205       1 server_linux.go:165] "Using iptables Proxier"
	I0612 21:39:47.484521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0612 21:39:47.485171       1 server.go:872] "Version info" version="v1.30.1"
	I0612 21:39:47.485535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 21:39:47.488126       1 config.go:192] "Starting service config controller"
	I0612 21:39:47.488162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0612 21:39:47.488188       1 config.go:101] "Starting endpoint slice config controller"
	I0612 21:39:47.488197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0612 21:39:47.488969       1 config.go:319] "Starting node config controller"
	I0612 21:39:47.489001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0612 21:39:47.588500       1 shared_informer.go:320] Caches are synced for service config
	I0612 21:39:47.588641       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0612 21:39:47.589226       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6b021c195669] <==
	E0612 21:39:29.271839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0612 21:39:29.275489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0612 21:39:29.275551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0612 21:39:29.296739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0612 21:39:29.297145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0612 21:39:29.433593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0612 21:39:29.433887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0612 21:39:29.471880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0612 21:39:29.471994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0612 21:39:29.482669       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0612 21:39:29.483008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0612 21:39:29.569402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0612 21:39:29.571433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0612 21:39:29.677906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0612 21:39:29.677950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0612 21:39:29.687951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0612 21:39:29.688054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0612 21:39:29.780288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0612 21:39:29.780411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0612 21:39:29.832564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0612 21:39:29.832892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0612 21:39:29.889591       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0612 21:39:29.889868       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0612 21:39:32.513980       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0612 22:00:01.172050       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [755750ecd1e3] <==
	I0612 22:02:28.771072       1 serving.go:380] Generated self-signed cert in-memory
	W0612 22:02:31.003959       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0612 22:02:31.004072       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0612 22:02:31.004087       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0612 22:02:31.004098       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0612 22:02:31.034273       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0612 22:02:31.034440       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0612 22:02:31.039288       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0612 22:02:31.039331       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0612 22:02:31.039699       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0612 22:02:31.040018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0612 22:02:31.139849       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617093    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-45qqd" podUID="8736e2b2-a744-4092-ac73-c59700fda8a4"
	Jun 12 22:03:09 multinode-025000 kubelet[1517]: E0612 22:03:09.617405    1517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-vgcxw" podUID="c5bd143a-d39e-46af-9308-0a97bb45729c"
	Jun 12 22:03:13 multinode-025000 kubelet[1517]: I0612 22:03:13.617647    1517 scope.go:117] "RemoveContainer" containerID="3546a5c00321078fed32a806a318f4e56e89801ea54ea9463adf37f82327b38a"
	Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.637114    1517 scope.go:117] "RemoveContainer" containerID="0749f44d03561395230c8a60a41853a49502741bf3bcd45acc924d346061f5b0"
	Jun 12 22:03:25 multinode-025000 kubelet[1517]: E0612 22:03:25.663119    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 22:03:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 22:03:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 22:03:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 22:03:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 22:03:25 multinode-025000 kubelet[1517]: I0612 22:03:25.699754    1517 scope.go:117] "RemoveContainer" containerID="2455f315465b9508a3fe1025d7150342eedb3cb09eb5f8fd9b2cbbffe1306db0"
	Jun 12 22:04:25 multinode-025000 kubelet[1517]: E0612 22:04:25.655952    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 22:04:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 22:04:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 22:04:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 22:04:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 22:05:25 multinode-025000 kubelet[1517]: E0612 22:05:25.654509    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 22:05:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 22:05:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 22:05:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 22:05:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 12 22:06:25 multinode-025000 kubelet[1517]: E0612 22:06:25.656587    1517 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 12 22:06:25 multinode-025000 kubelet[1517]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 12 22:06:25 multinode-025000 kubelet[1517]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 12 22:06:25 multinode-025000 kubelet[1517]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 12 22:06:25 multinode-025000 kubelet[1517]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0612 15:06:50.746959    1952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-025000 -n multinode-025000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-025000 -n multinode-025000: (12.0379681s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-025000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (517.60s)
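Note on the failure signature: the describe output above shows multinode-025000-m03 still tainted node.kubernetes.io/unreachable with every condition Unknown ("Kubelet stopped posting node status"), i.e. the restart brought m02 back to Ready but never recovered m03. A minimal sketch for surfacing that state when reproducing locally, run from a POSIX shell against the same multinode-025000 context (the jsonpath query is illustrative and not part of the test harness):

	kubectl --context multinode-025000 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.spec.taints[*].key}{"\n"}{end}'

A node that prints Unknown for its Ready condition alongside node.kubernetes.io/unreachable matches the m03 entry in the post-mortem above.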

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (314.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-318100 --driver=hyperv
E0612 15:49:51.934436    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 15:51:13.937247    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-318100 --driver=hyperv: exit status 1 (4m59.8654539s)

                                                
                                                
-- stdout --
	* [NoKubernetes-318100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-318100" primary control-plane node in "NoKubernetes-318100" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0612 15:49:48.038319    1584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-318100 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-318100 -n NoKubernetes-318100
E0612 15:54:51.923828    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-318100 -n NoKubernetes-318100: exit status 6 (14.654872s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0612 15:54:47.967391   15008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0612 15:55:02.376461   15008 status.go:414] forwarded endpoint: failed to lookup ip for ""

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-318100" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (314.52s)
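
The interleaved "cert_rotation.go:168] key failed with" lines come from the test binary itself: its kubeconfig still references the client certificates of profiles deleted earlier in the run (addons-605800, functional-269100), so client-go's periodic key-pair reload fails with a missing-file error. A simplified sketch of the reload that fails (not client-go's actual code path):

    package main

    import (
    	"crypto/tls"
    	"log"
    )

    func main() {
    	// The profile directory was removed when the earlier test cleaned up,
    	// but the kubeconfig entry (and the cached transport) still points here.
    	base := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\`
    	if _, err := tls.LoadX509KeyPair(base+"client.crt", base+"client.key"); err != nil {
    		log.Printf("key failed with : %v", err) // mirrors the E-lines above
    	}
    }
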

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (10800.367s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-837800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv
E0612 15:56:13.945454    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 15:56:15.211854    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (19m38s)
	TestNetworkPlugins/group/auto (4m13s)
	TestNetworkPlugins/group/auto/Start (4m13s)
	TestNetworkPlugins/group/enable-default-cni (47s)
	TestNetworkPlugins/group/enable-default-cni/Start (47s)
	TestNetworkPlugins/group/flannel (1m1s)
	TestNetworkPlugins/group/flannel/Start (1m1s)
	TestPause (11m40s)
	TestPause/serial (11m40s)
	TestPause/serial/SecondStartNoReconfiguration (2m13s)
	TestStartStop (12m28s)
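
Note that this is not a failure of enable-default-cni/Start itself: the whole test binary hit the 3-hour -timeout it is run with, so the testing package's alarm goroutine panicked and dumped every live goroutine. The tests listed above are simply whatever was still running at that moment. A minimal repro of the mechanism, assuming a standalone test file outside this suite:

    package demo

    import (
    	"testing"
    	"time"
    )

    // Any test still running when the -timeout alarm fires produces the same
    // "panic: test timed out after ..." banner plus a full goroutine dump.
    // Run with: go test -timeout 30s
    func TestOutlivesDeadline(t *testing.T) {
    	time.Sleep(time.Hour)
    }
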

                                                
                                                
goroutine 2426 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000698ea0, 0xc00138fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007842b8, {0x5022020, 0x2a, 0x2a}, {0x2c56718?, 0xa9806f?, 0x50452a0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0007df860)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0007df860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000070200)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 42 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 41
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2379 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffb7eba4de0?, {0xc0014a5bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x4f0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0000516e0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001452160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001452160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013bd040, 0xc001452160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0013bd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0013bd040, 0xc0008807e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2224
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 151 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c80d60, 0xc0001064e0}, 0xc0014e7f50, 0xc0014e7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3c80d60, 0xc0001064e0}, 0xa0?, 0xc0014e7f50, 0xc0014e7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c80d60?, 0xc0001064e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014e7fd0?, 0xb6e404?, 0xc000106420?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 179
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 150 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00069c750, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x26ef780?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000936480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00069c780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004326d0, {0x3c5d280, 0xc0006e6f30}, 0x1, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004326d0, 0x3b9aca00, 0x0, 0x1, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 179
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 152 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 151
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2402 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001452000, 0xc0000de2a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2383
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2407 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x9fe71b?, {0xc0019e3b20?, 0x9f7ea5?, 0x50d2700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x67?, 0xc0019e3b80?, 0x9efdd6?, 0x50d2700?, 0xc0019e3c08?, 0x9e2985?, 0x15bbda80a28?, 0x67?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x730, {0xc001417d6c?, 0x294, 0xa9417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001602c88?, {0xc001417d6c?, 0xa1c1be?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001602c88, {0xc001417d6c, 0x294, 0x294})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000494130, {0xc001417d6c?, 0xc0019e3d98?, 0xe5a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0017cc390, {0x3c5be40, 0xc000836e60})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c5bf80, 0xc0017cc390}, {0x3c5be40, 0xc000836e60}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c5bf80, 0xc0017cc390})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fd5b50?, {0x3c5bf80?, 0xc0017cc390?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3c5bf80, 0xc0017cc390}, {0x3c5bf00, 0xc000494130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001b22cc0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2405
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b
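
Goroutine 2407 above, and the similar traces that follow (2406, 2385, 2424, 2423, 2380, 2381, 2384), are the stdout/stderr pumps that os/exec starts when a command's output goes to an in-memory buffer rather than an *os.File; on Windows they park in syscall.ReadFile until the child closes its end of the pipe. A minimal sketch of what creates them:

    package main

    import (
    	"bytes"
    	"os/exec"
    )

    func main() {
    	// With a bytes.Buffer as Stdout/Stderr, Start creates an os.Pipe plus
    	// one copying goroutine per stream; those are the writerDescriptor
    	// goroutines parked in syscall.ReadFile in the traces here.
    	var out, stderr bytes.Buffer
    	cmd := exec.Command("out/minikube-windows-amd64.exe", "status")
    	cmd.Stdout, cmd.Stderr = &out, &stderr
    	_ = cmd.Run() // Run only returns after both pumps drain
    }
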

                                                
                                                
goroutine 2406 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xc001568000?, {0xc0013e1b20?, 0x9f7ea5?, 0x50d2700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00096e400?, 0xc0013e1b80?, 0x9efdd6?, 0x50d2700?, 0xc0013e1c08?, 0x9e281b?, 0x9d8ba6?, 0x4ff2635?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x614, {0xc000876a10?, 0x5f0, 0xc000876800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001602788?, {0xc000876a10?, 0xa1c1be?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001602788, {0xc000876a10, 0x5f0, 0x5f0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000494118, {0xc000876a10?, 0xc0013e1d98?, 0x210?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0017cc360, {0x3c5be40, 0xc0008c2128})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c5bf80, 0xc0017cc360}, {0x3c5be40, 0xc0008c2128}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c5bf80, 0xc0017cc360})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fd5b50?, {0x3c5bf80?, 0xc0017cc360?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3c5bf80, 0xc0017cc360}, {0x3c5bf00, 0xc000494118}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001946750?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2405
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2094 [chan receive, 20 minutes]:
testing.(*T).Run(0xc00084cd00, {0x2bfa819?, 0xa4f48d?}, 0xc0000081f8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00084cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00084cd00, 0x37068e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 178 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009365a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 179 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00069c780, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1131 [chan send, 149 minutes]:
os/exec.(*Cmd).watchCtx(0xc001453a20, 0xc0000ded80)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1130
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 947 [chan receive, 151 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001c28740, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 906
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2370 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00154b6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00154b6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00154b6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00154b6c0, 0xc0016d8180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2270
	/usr/local/go/src/testing/testing.go:1742 +0x390
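
Goroutine 2370 above, and the other traces parked in testContext.waitParallel (2242, 2273, 2271, 2231, 2372, 2225, 2230, 2229, 2232, 2272), are not stuck: they are subtests that called t.Parallel() and are queued for a slot under -test.parallel while the long-running Start subtests hold the available slots. The pattern, as a standalone illustration:

    package demo

    import "testing"

    func TestGroup(t *testing.T) {
    	for _, name := range []string{"auto", "flannel", "bridge"} {
    		t.Run(name, func(t *testing.T) {
    			t.Parallel() // parks in waitParallel while all slots are busy
    		})
    	}
    }
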

                                                
                                                
goroutine 2270 [chan receive, 12 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00154b040, 0x3706b00)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2109
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2425 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018682c0, 0xc0008be720)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2422
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2242 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00154ab60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00154ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00154ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00154ab60, 0xc001396480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2385 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x15be2fb3a28?, {0xc00089fb20?, 0x9f7ea5?, 0x50d2700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4ff184d?, 0xc00089fb80?, 0x9efdd6?, 0x50d2700?, 0xc00089fc08?, 0x9e2985?, 0x15bbda80a28?, 0x77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7cc, {0xc0007e2796?, 0x186a, 0xa9417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001952a08?, {0xc0007e2796?, 0xa1c1be?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001952a08, {0xc0007e2796, 0x186a, 0x186a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000494088, {0xc0007e2796?, 0xc00089fd98?, 0x2000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000881920, {0x3c5be40, 0xc0008c2038})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c5bf80, 0xc000881920}, {0x3c5be40, 0xc0008c2038}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c5bf80, 0xc000881920})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fd5b50?, {0x3c5bf80?, 0xc000881920?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3c5bf80, 0xc000881920}, {0x3c5bf00, 0xc000494088}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001476370?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2383
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 946 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000861920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 906
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2424 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0019ddb20?, 0x9f7ea5?, 0x50d2700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0019ddb67?, 0xc0019ddb80?, 0x9efdd6?, 0x50d2700?, 0xc0019ddc08?, 0x9e2985?, 0x15bbda80a28?, 0x67?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x688, {0xc0007e5ca8?, 0x358, 0xa9417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000863188?, {0xc0007e5ca8?, 0xa1c1be?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000863188, {0xc0007e5ca8, 0x358, 0x358})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000836e20, {0xc0007e5ca8?, 0xc0019ddd98?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001946270, {0x3c5be40, 0xc000836e40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c5bf80, 0xc001946270}, {0x3c5be40, 0xc000836e40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c5bf80, 0xc001946270})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fd5b50?, {0x3c5bf80?, 0xc001946270?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3c5bf80, 0xc001946270}, {0x3c5bf00, 0xc000836e20}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0016c5f80?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2422
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2273 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00154b520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00154b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00154b520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00154b520, 0xc0016d8140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2270
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2382 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc001452160, 0xc0000de3c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2379
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2405 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffb7eba4de0?, {0xc001727bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x610, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001da0780)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001452420)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001452420)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013bd520, 0xc001452420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0013bd520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0013bd520, 0xc0017cc1b0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2243
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2423 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0xb25340?, {0xc0019e1b20?, 0x9f7ea5?, 0x50d2700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4d?, 0xc0019e1b80?, 0x9efdd6?, 0x50d2700?, 0xc0019e1c08?, 0x9e281b?, 0x9d8ba6?, 0x4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6a4, {0xc0012a19ef?, 0x211, 0xa9417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000862c88?, {0xc0012a19ef?, 0xa1c1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000862c88, {0xc0012a19ef, 0x211, 0x211})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000836de8, {0xc0012a19ef?, 0xc0019e1d98?, 0x6a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001946240, {0x3c5be40, 0xc00011c768})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c5bf80, 0xc001946240}, {0x3c5be40, 0xc00011c768}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c5bf80, 0xc001946240})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fd5b50?, {0x3c5bf80?, 0xc001946240?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3c5bf80, 0xc001946240}, {0x3c5bf00, 0xc000836de8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x37067f8?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2422
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2380 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x36393a6f672e6970?, {0xc001dfdb20?, 0x9f7ea5?, 0x50d2700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x363049090a5d3e6c?, 0xc001dfdb80?, 0x9efdd6?, 0x50d2700?, 0xc001dfdc08?, 0x9e281b?, 0x15bbda80a28?, 0x2220646f70207235?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x2a8, {0xc000700a25?, 0x5db, 0xa9417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001952788?, {0xc000700a25?, 0x0?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001952788, {0xc000700a25, 0x5db, 0x5db})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000494030, {0xc000700a25?, 0x15be3282128?, 0x224?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008813e0, {0x3c5be40, 0xc0008c2018})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c5bf80, 0xc0008813e0}, {0x3c5be40, 0xc0008c2018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c5bf80, 0xc0008813e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fd5b50?, {0x3c5bf80?, 0xc0008813e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3c5bf80, 0xc0008813e0}, {0x3c5bf00, 0xc000494030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0018a27e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2379
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 757 [IO wait, 162 minutes]:
internal/poll.runtime_pollWait(0x15be30fb160, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x9efdd6?, 0x50d2700?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0008516a0, 0xc00083fbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000851688, 0x2c0, {0xc001316000?, 0x0?, 0x0?}, 0xc000680808?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000851688, 0xc00083fd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000851688)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0008bc420)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0008bc420)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0000aa0f0, {0x3c73e00, 0xc0008bc420})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0000aa0f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00084c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 754
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
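
Goroutine 757 above, idle in Accept for 162 minutes, is the HTTP proxy that the functional tests start via startHTTPProxy and never shut down; a listener blocked in Accept for the lifetime of the run is expected, not a leak. Roughly (a simplified stand-in, not the suite's code):

    package main

    import "net/http"

    func main() {
    	// A server goroutine like this sits in net.(*TCPListener).Accept
    	// ("IO wait") until the process exits. Ephemeral port for the sketch;
    	// the real proxy's address is chosen by the test.
    	go func() { _ = http.ListenAndServe("127.0.0.1:0", nil) }()
    	select {}
    }
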

                                                
                                                
goroutine 2243 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00154ad00, {0x2bfa81e?, 0x3c55df0?}, 0xc0017cc1b0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00154ad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc00154ad00, 0xc001396500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2384 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x15be2f83990?, {0xc001519b20?, 0x9f7ea5?, 0x50d2700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x15be2f8394d?, 0xc001519b80?, 0x9efdd6?, 0x50d2700?, 0xc001519c08?, 0x9e281b?, 0x15bbda80598?, 0x20035?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x404, {0xc000858de7?, 0x219, 0xa9417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001952288?, {0xc000858de7?, 0x0?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001952288, {0xc000858de7, 0x219, 0x219})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000494058, {0xc000858de7?, 0x15be2fb24e8?, 0x68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008816e0, {0x3c5be40, 0xc000836010})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c5bf80, 0xc0008816e0}, {0x3c5be40, 0xc000836010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c5bf80, 0xc0008816e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fd5b50?, {0x3c5bf80?, 0xc0008816e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3c5bf80, 0xc0008816e0}, {0x3c5bf00, 0xc000494058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001c7e2a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2383
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 879 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001c28710, 0x36)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x26ef780?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000861800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001c28740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001320250, {0x3c5d280, 0xc001885d10}, 0x1, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001320250, 0x3b9aca00, 0x0, 0x1, 0xc0001064e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 947
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2408 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001452420, 0xc0000de780)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2405
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2360 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0013bcd00, {0x2c39a35?, 0x24?}, 0xc00012f100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0013bcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0013bcd00, 0xc0017cc1e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2096
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 880 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c80d60, 0xc0001064e0}, 0xc001357f50, 0xc001357f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3c80d60, 0xc0001064e0}, 0xa0?, 0xc001357f50, 0xc001357f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c80d60?, 0xc0001064e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001357fd0?, 0xb6e404?, 0xc0001072c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 947
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1382 [chan send, 143 minutes]:
os/exec.(*Cmd).watchCtx(0xc001f2e160, 0xc002056300)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 781
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 881 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 880
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2271 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00154b1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00154b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00154b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00154b1e0, 0xc0016d80c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2270
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2231 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000699860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000699860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000699860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000699860, 0xc000822400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2372 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00154ba00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00154ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00154ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00154ba00, 0xc0016d8240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2270
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2225 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00154a9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00154a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00154a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00154a9c0, 0xc001396400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2223 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00154a340, 0xc0000081f8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2094
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2224 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00154a820, {0x2bfa81e?, 0x3c55df0?}, 0xc0008807e0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00154a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc00154a820, 0xc001396380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2371 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00154b860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00154b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00154b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00154b860, 0xc0016d81c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2270
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2381 [syscall, locked to thread]:
syscall.SyscallN(0x15be345d250?, {0xc0012e9b20?, 0x9f7ea5?, 0x8?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x15be345d250?, 0xc0012e9b80?, 0x9efdd6?, 0x50d2700?, 0xc0012e9c08?, 0x9e2985?, 0x0?, 0x10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x418, {0xc00149a08f?, 0x7f71, 0xa9417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001952c88?, {0xc00149a08f?, 0xa1c1be?, 0x10000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001952c88, {0xc00149a08f, 0x7f71, 0x7f71})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000494070, {0xc00149a08f?, 0xc001812700?, 0x7e4e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008814d0, {0x3c5be40, 0xc000824010})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c5bf80, 0xc0008814d0}, {0x3c5be40, 0xc000824010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0012e9e78?, {0x3c5bf80, 0xc0008814d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4fd5b50?, {0x3c5bf80?, 0xc0008814d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3c5bf80, 0xc0008814d0}, {0x3c5bf00, 0xc000494070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0018a2600?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2379
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2383 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffb7eba4de0?, {0xc00006ba10?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x7c4, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000051800)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001452000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc001452000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013bd380, 0xc001452000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartNoReconfigure({0x3c80ba0, 0xc0000f8230}, 0xc0013bd380, {0xc001bd2be0?, 0xc00377d780?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:92 +0x245
k8s.io/minikube/test/integration.TestPause.func1.1(0xc0013bd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc0013bd380, 0xc00012f100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2360
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2229 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000698b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000698b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000698b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000698b60, 0xc000822300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2228 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000698820, {0x2bfa81e?, 0x3c55df0?}, 0xc001946150)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000698820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc000698820, 0xc000822280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2232 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013bcb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013bcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013bcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013bcb60, 0xc000822480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2272 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00154b380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00154b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00154b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00154b380, 0xc0016d8100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2270
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2422 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffb7eba4de0?, {0xc00138fbd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x7e8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0017712f0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0018682c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0018682c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015d2b60, 0xc0018682c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0015d2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0015d2b60, 0xc001946150)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2228
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2109 [chan receive, 12 minutes]:
testing.(*T).Run(0xc00084d860, {0x2bfa819?, 0xb27333?}, 0x3706b00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00084d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00084d860, 0x3706928)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2230 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0000dd770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000699380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000699380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000699380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000699380, 0xc000822380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2223
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2096 [chan receive, 12 minutes]:
testing.(*T).Run(0xc00084d1e0, {0x2bfbd2c?, 0xd18c2e2800?}, 0xc0017cc1e0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc00084d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc00084d1e0, 0x37068f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                    

Test pass (156/200)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23.52
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.19
9 TestDownloadOnly/v1.20.0/DeleteAll 1.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.16
12 TestDownloadOnly/v1.30.1/json-events 14.77
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.2
18 TestDownloadOnly/v1.30.1/DeleteAll 1.1
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 1.18
21 TestBinaryMirror 7.16
22 TestOffline 252.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 444.7
30 TestAddons/parallel/Ingress 67.74
31 TestAddons/parallel/InspektorGadget 32.28
32 TestAddons/parallel/MetricsServer 23.24
33 TestAddons/parallel/HelmTiller 28.89
35 TestAddons/parallel/CSI 98.09
36 TestAddons/parallel/Headlamp 36.94
37 TestAddons/parallel/CloudSpanner 20.72
38 TestAddons/parallel/LocalPath 30.7
39 TestAddons/parallel/NvidiaDevicePlugin 20.54
40 TestAddons/parallel/Yakd 5.02
41 TestAddons/parallel/Volcano 52.24
44 TestAddons/serial/GCPAuth/Namespaces 0.35
45 TestAddons/StoppedEnableDisable 53.1
46 TestCertOptions 495.16
47 TestCertExpiration 902.36
48 TestDockerFlags 434.92
49 TestForceSystemdFlag 537.42
50 TestForceSystemdEnv 417.26
57 TestErrorSpam/start 16.63
58 TestErrorSpam/status 35.27
59 TestErrorSpam/pause 21.75
60 TestErrorSpam/unpause 22.02
61 TestErrorSpam/stop 55.43
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 236.13
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 124.1
68 TestFunctional/serial/KubeContext 0.14
69 TestFunctional/serial/KubectlGetPods 0.23
72 TestFunctional/serial/CacheCmd/cache/add_remote 25.68
73 TestFunctional/serial/CacheCmd/cache/add_local 10.7
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.17
75 TestFunctional/serial/CacheCmd/cache/list 0.17
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.97
77 TestFunctional/serial/CacheCmd/cache/cache_reload 35.22
78 TestFunctional/serial/CacheCmd/cache/delete 0.37
79 TestFunctional/serial/MinikubeKubectlCmd 0.43
81 TestFunctional/serial/ExtraConfig 125.99
82 TestFunctional/serial/ComponentHealth 0.18
83 TestFunctional/serial/LogsCmd 8.28
84 TestFunctional/serial/LogsFileCmd 10.21
85 TestFunctional/serial/InvalidService 20.42
91 TestFunctional/parallel/StatusCmd 42.16
95 TestFunctional/parallel/ServiceCmdConnect 29.06
96 TestFunctional/parallel/AddonsCmd 0.64
97 TestFunctional/parallel/PersistentVolumeClaim 40.19
99 TestFunctional/parallel/SSHCmd 20.63
100 TestFunctional/parallel/CpCmd 60.6
101 TestFunctional/parallel/MySQL 67
102 TestFunctional/parallel/FileSync 11.1
103 TestFunctional/parallel/CertSync 64.38
107 TestFunctional/parallel/NodeLabels 0.19
109 TestFunctional/parallel/NonActiveRuntimeDisabled 11.39
111 TestFunctional/parallel/License 3.22
112 TestFunctional/parallel/DockerEnv/powershell 45.96
113 TestFunctional/parallel/UpdateContextCmd/no_changes 3.37
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.82
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.76
116 TestFunctional/parallel/ImageCommands/ImageListShort 7.85
117 TestFunctional/parallel/ImageCommands/ImageListTable 7.29
118 TestFunctional/parallel/ImageCommands/ImageListJson 7.95
119 TestFunctional/parallel/ImageCommands/ImageListYaml 8.01
120 TestFunctional/parallel/ImageCommands/ImageBuild 27.03
121 TestFunctional/parallel/ImageCommands/Setup 6.01
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 23.45
123 TestFunctional/parallel/ServiceCmd/DeployApp 21.47
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 20.98
125 TestFunctional/parallel/ServiceCmd/List 14.28
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.89
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 30.56
131 TestFunctional/parallel/ServiceCmd/JSONOutput 14.96
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 27.52
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.74
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
142 TestFunctional/parallel/ImageCommands/ImageRemove 16.82
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 19.45
145 TestFunctional/parallel/ProfileCmd/profile_not_create 11.89
146 TestFunctional/parallel/ProfileCmd/profile_list 11.46
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.72
148 TestFunctional/parallel/ProfileCmd/profile_json_output 11.89
149 TestFunctional/parallel/Version/short 0.23
150 TestFunctional/parallel/Version/components 8.53
151 TestFunctional/delete_addon-resizer_images 0.45
152 TestFunctional/delete_my-image_image 0.17
153 TestFunctional/delete_minikube_cached_images 0.17
157 TestMultiControlPlane/serial/StartCluster 718.4
158 TestMultiControlPlane/serial/DeployApp 12.62
160 TestMultiControlPlane/serial/AddWorkerNode 258.53
161 TestMultiControlPlane/serial/NodeLabels 0.19
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.15
163 TestMultiControlPlane/serial/CopyFile 642.76
167 TestImageBuild/serial/Setup 191.34
168 TestImageBuild/serial/NormalBuild 9.39
169 TestImageBuild/serial/BuildWithBuildArg 8.76
170 TestImageBuild/serial/BuildWithDockerIgnore 7.51
171 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.21
175 TestJSONOutput/start/Command 203.82
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 7.48
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 7.41
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 39.75
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 1.28
203 TestMainNoArgs 0.16
204 TestMinikubeProfile 522.66
207 TestMountStart/serial/StartWithMountFirst 155.04
208 TestMountStart/serial/VerifyMountFirst 9.56
209 TestMountStart/serial/StartWithMountSecond 156.35
210 TestMountStart/serial/VerifyMountSecond 9.46
211 TestMountStart/serial/DeleteFirst 26.72
212 TestMountStart/serial/VerifyMountPostDelete 8.81
213 TestMountStart/serial/Stop 29.01
217 TestMultiNode/serial/FreshStart2Nodes 412.37
218 TestMultiNode/serial/DeployApp2Nodes 8.41
220 TestMultiNode/serial/AddNode 225.64
221 TestMultiNode/serial/MultiNodeLabels 0.18
222 TestMultiNode/serial/ProfileList 9.8
223 TestMultiNode/serial/CopyFile 359.99
224 TestMultiNode/serial/StopNode 76.66
225 TestMultiNode/serial/StartAfterStop 184.36
230 TestPreload 507.76
231 TestScheduledStopWindows 323.23
236 TestRunningBinaryUpgrade 1127.22
238 TestKubernetesUpgrade 1269.08
240 TestStoppedBinaryUpgrade/Setup 0.81
241 TestStoppedBinaryUpgrade/Upgrade 955.83
253 TestStoppedBinaryUpgrade/MinikubeLogs 10.33
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.21
TestDownloadOnly/v1.20.0/json-events (23.52s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-328400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-328400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (23.5214094s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.52s)

TestDownloadOnly/v1.20.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-328400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-328400: exit status 85 (187.2042ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-328400 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:56 PDT |          |
	|         | -p download-only-328400        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 12:56:34
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 12:56:34.488319   13300 out.go:291] Setting OutFile to fd 628 ...
	I0612 12:56:34.489368   13300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 12:56:34.489368   13300 out.go:304] Setting ErrFile to fd 632...
	I0612 12:56:34.489368   13300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0612 12:56:34.502601   13300 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0612 12:56:34.515160   13300 out.go:298] Setting JSON to true
	I0612 12:56:34.519087   13300 start.go:129] hostinfo: {"hostname":"minikube1","uptime":20547,"bootTime":1718201647,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 12:56:34.519280   13300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 12:56:34.526827   13300 out.go:97] [download-only-328400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 12:56:34.530012   13300 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 12:56:34.526987   13300 notify.go:220] Checking for updates...
	W0612 12:56:34.526987   13300 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0612 12:56:34.537642   13300 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 12:56:34.542497   13300 out.go:169] MINIKUBE_LOCATION=19044
	I0612 12:56:34.547762   13300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0612 12:56:34.558044   13300 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0612 12:56:34.559440   13300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 12:56:39.875403   13300 out.go:97] Using the hyperv driver based on user configuration
	I0612 12:56:39.875633   13300 start.go:297] selected driver: hyperv
	I0612 12:56:39.875633   13300 start.go:901] validating driver "hyperv" against <nil>
	I0612 12:56:39.875938   13300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 12:56:39.925560   13300 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0612 12:56:39.927117   13300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0612 12:56:39.927117   13300 cni.go:84] Creating CNI manager for ""
	I0612 12:56:39.927117   13300 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0612 12:56:39.927117   13300 start.go:340] cluster config:
	{Name:download-only-328400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-328400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 12:56:39.928674   13300 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 12:56:39.932869   13300 out.go:97] Downloading VM boot image ...
	I0612 12:56:39.932869   13300 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19044/minikube-v1.33.1-1718047936-19044-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1718047936-19044-amd64.iso
	I0612 12:56:45.337642   13300 out.go:97] Starting "download-only-328400" primary control-plane node in "download-only-328400" cluster
	I0612 12:56:45.337835   13300 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0612 12:56:45.383682   13300 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0612 12:56:45.383783   13300 cache.go:56] Caching tarball of preloaded images
	I0612 12:56:45.384345   13300 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0612 12:56:45.387398   13300 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0612 12:56:45.387527   13300 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0612 12:56:45.461031   13300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0612 12:56:50.853442   13300 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0612 12:56:50.855559   13300 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0612 12:56:51.884064   13300 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0612 12:56:51.884944   13300 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-328400\config.json ...
	I0612 12:56:51.885552   13300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-328400\config.json: {Name:mk479956daf473739c829ab3cdc1b34347e70a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0612 12:56:51.886809   13300 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0612 12:56:51.888076   13300 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-328400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-328400"

-- /stdout --
** stderr ** 
	W0612 12:56:58.001615    8284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.19s)
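Note: the PASS despite "exit status 85" appears intentional here — the profile is download-only, so no VM was ever created and the logs command is expected to fail cleanly. A minimal sketch of that assertion as a Go test, assuming the binary path and profile name from this run:

package sketch

import (
	"errors"
	"os/exec"
	"testing"
)

// TestLogsFailsWithoutHost runs "minikube logs" for a profile with no VM and
// expects a non-zero exit (status 85 in the run above) rather than success.
func TestLogsFailsWithoutHost(t *testing.T) {
	err := exec.Command("out/minikube-windows-amd64.exe", "logs", "-p", "download-only-328400").Run()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		t.Fatalf("expected minikube logs to exit non-zero, got %v", err)
	}
	t.Logf("exit status %d, as expected when the host does not exist", exitErr.ExitCode())
}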

TestDownloadOnly/v1.20.0/DeleteAll (1.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1400688s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.16s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-328400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-328400: (1.1551733s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.16s)

TestDownloadOnly/v1.30.1/json-events (14.77s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-880500 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-880500 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv: (14.7677422s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (14.77s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.2s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-880500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-880500: exit status 85 (194.031ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-328400 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:56 PDT |                     |
	|         | -p download-only-328400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:56 PDT | 12 Jun 24 12:56 PDT |
	| delete  | -p download-only-328400        | download-only-328400 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:56 PDT | 12 Jun 24 12:57 PDT |
	| start   | -o=json --download-only        | download-only-880500 | minikube1\jenkins | v1.33.1 | 12 Jun 24 12:57 PDT |                     |
	|         | -p download-only-880500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/12 12:57:00
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0612 12:57:00.493395    7136 out.go:291] Setting OutFile to fd 732 ...
	I0612 12:57:00.493908    7136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 12:57:00.493908    7136 out.go:304] Setting ErrFile to fd 736...
	I0612 12:57:00.493908    7136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 12:57:00.520095    7136 out.go:298] Setting JSON to true
	I0612 12:57:00.523832    7136 start.go:129] hostinfo: {"hostname":"minikube1","uptime":20573,"bootTime":1718201647,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 12:57:00.523832    7136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 12:57:00.529792    7136 out.go:97] [download-only-880500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 12:57:00.529792    7136 notify.go:220] Checking for updates...
	I0612 12:57:00.532387    7136 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 12:57:00.534754    7136 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 12:57:00.537686    7136 out.go:169] MINIKUBE_LOCATION=19044
	I0612 12:57:00.539933    7136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0612 12:57:00.544659    7136 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0612 12:57:00.544659    7136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0612 12:57:05.956205    7136 out.go:97] Using the hyperv driver based on user configuration
	I0612 12:57:05.956304    7136 start.go:297] selected driver: hyperv
	I0612 12:57:05.956388    7136 start.go:901] validating driver "hyperv" against <nil>
	I0612 12:57:05.956586    7136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0612 12:57:06.002620    7136 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0612 12:57:06.003994    7136 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0612 12:57:06.003994    7136 cni.go:84] Creating CNI manager for ""
	I0612 12:57:06.003994    7136 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0612 12:57:06.003994    7136 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0612 12:57:06.003994    7136 start.go:340] cluster config:
	{Name:download-only-880500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718016726-19044@sha256:44021a7ae98037938951ca79da6077ed81d15edb2d34c692701c3e2fea4d176a Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-880500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0612 12:57:06.004605    7136 iso.go:125] acquiring lock: {Name:mk052eb609047b80b971cea5054470b0706b5b41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0612 12:57:06.008297    7136 out.go:97] Starting "download-only-880500" primary control-plane node in "download-only-880500" cluster
	I0612 12:57:06.008297    7136 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 12:57:06.057407    7136 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0612 12:57:06.057639    7136 cache.go:56] Caching tarball of preloaded images
	I0612 12:57:06.058309    7136 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0612 12:57:06.061618    7136 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0612 12:57:06.061618    7136 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0612 12:57:06.130596    7136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-880500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-880500"

-- /stdout --
** stderr ** 
	W0612 12:57:15.256127   11408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.20s)

TestDownloadOnly/v1.30.1/DeleteAll (1.1s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0975494s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (1.10s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.18s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-880500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-880500: (1.1745269s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.18s)

TestBinaryMirror (7.16s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-097300 --alsologtostderr --binary-mirror http://127.0.0.1:58105 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-097300 --alsologtostderr --binary-mirror http://127.0.0.1:58105 --driver=hyperv: (6.2821085s)
helpers_test.go:175: Cleaning up "binary-mirror-097300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-097300
--- PASS: TestBinaryMirror (7.16s)
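Note: --binary-mirror http://127.0.0.1:58105 points the kubectl/kubeadm/kubelet downloads at a local HTTP server instead of dl.k8s.io. A minimal sketch of such a mirror, assuming the served directory mimics the release tree seen in the download URLs above (e.g. ./mirror/v1.30.1/bin/windows/amd64/kubectl.exe — the exact layout minikube requests may differ; this is illustrative, not the harness's actual server):

package main

import (
	"log"
	"net/http"
)

// serveBinaryMirror exposes dir over plain HTTP so a command like
// "minikube start --download-only --binary-mirror http://127.0.0.1:58105"
// can fetch Kubernetes binaries from it.
func serveBinaryMirror(dir, addr string) error {
	return http.ListenAndServe(addr, http.FileServer(http.Dir(dir)))
}

func main() {
	log.Fatal(serveBinaryMirror("./mirror", "127.0.0.1:58105"))
}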

TestOffline (252.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-253200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-253200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m32.5268328s)
helpers_test.go:175: Cleaning up "offline-docker-253200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-253200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-253200: (40.4420416s)
--- PASS: TestOffline (252.98s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-605800
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-605800: exit status 85 (193.5685ms)

-- stdout --
	* Profile "addons-605800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-605800"

-- /stdout --
** stderr ** 
	W0612 12:57:27.105855    8468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)
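Note: the same stderr warning recurs throughout this report. The Docker CLI keeps context metadata under %USERPROFILE%\.docker\contexts\meta\<SHA-256 of the context name>\meta.json, and the 37a8eec1... directory in every warning matches the SHA-256 of the string "default"; the file simply was never created on this CI host. A minimal sketch of the digest computation (an observation about the CLI's on-disk layout, not minikube code):

package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

// contextMetaPath reproduces the path the Docker CLI complains about: the
// context name is hashed with SHA-256 and used as the metadata directory.
func contextMetaPath(dockerDir, contextName string) string {
	digest := fmt.Sprintf("%x", sha256.Sum256([]byte(contextName)))
	return filepath.Join(dockerDir, "contexts", "meta", digest, "meta.json")
}

func main() {
	// Prints ...\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json
	fmt.Println(contextMetaPath(`C:\Users\jenkins.minikube1\.docker`, "default"))
}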

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-605800
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-605800: exit status 85 (190.667ms)

-- stdout --
	* Profile "addons-605800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-605800"

-- /stdout --
** stderr ** 
	W0612 12:57:27.107844    7936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (444.7s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-605800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-605800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m24.6964178s)
--- PASS: TestAddons/Setup (444.70s)

TestAddons/parallel/Ingress (67.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-605800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-605800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-605800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [aef61866-3d51-4789-832a-145a267b41e2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [aef61866-3d51-4789-832a-145a267b41e2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.0178962s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.3140587s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-605800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0612 13:05:37.270189    7428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-605800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 ip: (2.8783497s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.23.204.232
addons_test.go:299: (dbg) Done: nslookup hello-john.test 172.23.204.232: (1.3331346s)
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable ingress-dns --alsologtostderr -v=1: (17.028871s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable ingress --alsologtostderr -v=1: (22.5047063s)
--- PASS: TestAddons/parallel/Ingress (67.74s)
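Note: the curl step above exercises host-based routing — ingress-nginx picks the backend from the HTTP Host header, so the same address serves the nginx pod only when the request claims to be for nginx.example.com. A minimal sketch of an equivalent request from the host side, assuming the ingress is reachable on the cluster IP printed above (inside the test the curl runs over ssh against 127.0.0.1):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://172.23.204.232/", nil)
	if err != nil {
		panic(err)
	}
	// The Host header, not the URL, is what the ingress controller routes on.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}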

TestAddons/parallel/InspektorGadget (32.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j8w5r" [8078700f-0fd4-43b4-8a9b-2d1e4f18394f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0255574s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-605800
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-605800: (27.2424535s)
--- PASS: TestAddons/parallel/InspektorGadget (32.28s)

TestAddons/parallel/MetricsServer (23.24s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.9941ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-84pkp" [3e4dbc95-8897-4cf6-be06-cbe64e3ad32b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0160644s
addons_test.go:417: (dbg) Run:  kubectl --context addons-605800 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable metrics-server --alsologtostderr -v=1: (17.0055698s)
--- PASS: TestAddons/parallel/MetricsServer (23.24s)

TestAddons/parallel/HelmTiller (28.89s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 15.9744ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-kwghn" [44e46616-6917-4c54-8768-9c6344cdbc67] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0115643s
addons_test.go:475: (dbg) Run:  kubectl --context addons-605800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-605800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.4035314s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable helm-tiller --alsologtostderr -v=1: (16.4370403s)
--- PASS: TestAddons/parallel/HelmTiller (28.89s)

TestAddons/parallel/CSI (98.09s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 10.8829ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-605800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-605800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ad273e07-0e40-4294-a1f3-bb7746050258] Pending
helpers_test.go:344: "task-pv-pod" [ad273e07-0e40-4294-a1f3-bb7746050258] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ad273e07-0e40-4294-a1f3-bb7746050258] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.0936934s
addons_test.go:586: (dbg) Run:  kubectl --context addons-605800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-605800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-605800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-605800 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-605800 delete pod task-pv-pod: (1.4914762s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-605800 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-605800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-605800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [86ee14a3-d902-4f3a-b53c-8b9d30151382] Pending
helpers_test.go:344: "task-pv-pod-restore" [86ee14a3-d902-4f3a-b53c-8b9d30151382] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [86ee14a3-d902-4f3a-b53c-8b9d30151382] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0206922s
addons_test.go:628: (dbg) Run:  kubectl --context addons-605800 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-605800 delete pod task-pv-pod-restore: (1.471031s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-605800 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-605800 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.6885716s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable volumesnapshots --alsologtostderr -v=1: (15.1930083s)
--- PASS: TestAddons/parallel/CSI (98.09s)
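Note: the long runs of identical kubectl invocations above are a poll — the helper re-reads .status.phase until the PVC reports Bound or a deadline expires. A minimal sketch of the same loop, with an illustrative interval and timeout (the harness's real values may differ):

package sketch

import (
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls "kubectl get pvc <name> -o jsonpath={.status.phase}"
// until the claim is Bound or the timeout elapses, mirroring the repeated
// helpers_test.go:394 lines above.
func waitPVCBound(kubeContext, name, namespace string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}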

TestAddons/parallel/Headlamp (36.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-605800 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-605800 --alsologtostderr -v=1: (16.8872307s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-7f6zf" [7892fbab-7d98-44c6-aa09-85f2ace375d5] Pending
helpers_test.go:344: "headlamp-7fc69f7444-7f6zf" [7892fbab-7d98-44c6-aa09-85f2ace375d5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-7f6zf" [7892fbab-7d98-44c6-aa09-85f2ace375d5] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 20.0463114s
--- PASS: TestAddons/parallel/Headlamp (36.94s)

TestAddons/parallel/CloudSpanner (20.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-dml7c" [8601b238-263d-4a67-994e-01280a13bee8] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.026908s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-605800
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-605800: (15.6810907s)
--- PASS: TestAddons/parallel/CloudSpanner (20.72s)

TestAddons/parallel/LocalPath (30.7s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-605800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-605800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-605800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [301f8b4a-e725-4741-97bf-65aba15ce370] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [301f8b4a-e725-4741-97bf-65aba15ce370] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [301f8b4a-e725-4741-97bf-65aba15ce370] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0240041s
addons_test.go:992: (dbg) Run:  kubectl --context addons-605800 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 ssh "cat /opt/local-path-provisioner/pvc-13bdd76f-efa9-4a83-9f26-b94e03561306_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 ssh "cat /opt/local-path-provisioner/pvc-13bdd76f-efa9-4a83-9f26-b94e03561306_default_test-pvc/file1": (10.1388913s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-605800 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-605800 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.0047186s)
--- PASS: TestAddons/parallel/LocalPath (30.70s)
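
The run of helpers_test.go:394 lines above is a poll on the claim's .status.phase until it reports Bound. A rough Go equivalent of that loop, reusing the context and claim names from this run:

	// pvc_wait.go — sketch of the PVC-phase poll, with an assumed retry budget.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 60; i++ {
			out, err := exec.Command("kubectl", "--context", "addons-605800",
				"get", "pvc", "test-pvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				fmt.Println("PVC is Bound")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for PVC to bind")
	}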

TestAddons/parallel/NvidiaDevicePlugin (20.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xqxxl" [c324b44a-88aa-4bbe-bc01-3e7e8f576efc] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0133508s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-605800
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-605800: (15.5251197s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.54s)

TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-4spfm" [9e77cc1b-b61e-4e16-bb3d-f9477f72c608] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0143621s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/parallel/Volcano (52.24s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 8.3657ms
addons_test.go:889: volcano-scheduler stabilized in 9.3296ms
addons_test.go:897: volcano-admission stabilized in 9.3296ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-wnvbv" [3267c8c6-14ae-4b64-bb0d-a0e507e9051c] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.0166724s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-t7tlr" [bb62d6f5-67ff-4a6f-bb90-b26fadcc4304] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.0321575s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-2rpt8" [b89c1e57-9f8f-4e94-8a7a-4dd2a74014f6] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0083444s
addons_test.go:924: (dbg) Run:  kubectl --context addons-605800 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-605800 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-605800 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9eddea1d-55ab-4198-b12c-fd8f505d1000] Pending
helpers_test.go:344: "test-job-nginx-0" [9eddea1d-55ab-4198-b12c-fd8f505d1000] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [9eddea1d-55ab-4198-b12c-fd8f505d1000] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 11.0212062s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-605800 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-605800 addons disable volcano --alsologtostderr -v=1: (25.2891999s)
--- PASS: TestAddons/parallel/Volcano (52.24s)

TestAddons/serial/GCPAuth/Namespaces (0.35s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-605800 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-605800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.35s)
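
The assertion here is that the gcp-auth addon replicates its gcp-auth secret into namespaces created after the addon was enabled, so the `kubectl get secret` in the new namespace must succeed. A sketch of the same two-step check, with the context and namespace names reused from this run:

	// gcp_auth_ns.go — sketch of the secret-replication check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := "addons-605800"
		// Create a namespace after gcp-auth is enabled...
		if out, err := exec.Command("kubectl", "--context", ctx,
			"create", "ns", "new-namespace").CombinedOutput(); err != nil {
			fmt.Printf("create ns failed: %v\n%s", err, out)
			return
		}
		// ...then the webhook should already have copied the secret into it.
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "secret", "gcp-auth", "-n", "new-namespace").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("gcp-auth secret was not replicated:", err)
		}
	}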

TestAddons/StoppedEnableDisable (53.1s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-605800
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-605800: (41.2034547s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-605800
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-605800: (4.6793217s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-605800
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-605800: (4.6929661s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-605800
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-605800: (2.5065146s)
--- PASS: TestAddons/StoppedEnableDisable (53.10s)
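
What this test pins down is that `minikube addons enable/disable` still exits zero while the cluster is stopped. A sketch of the same sequence; the binary path and profile name are the ones from this log:

	// stopped_addons.go — sketch of toggling addons against a stopped cluster.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		p := "addons-605800"
		for _, args := range [][]string{
			{"stop", "-p", p},
			{"addons", "enable", "dashboard", "-p", p},
			{"addons", "disable", "dashboard", "-p", p},
		} {
			out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
			fmt.Printf("$ minikube %v\n%s", args, out)
			if err != nil {
				fmt.Println("unexpected failure:", err)
				return
			}
		}
	}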

TestCertOptions (495.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-574400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-574400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m8.0027943s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-574400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-574400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.1776295s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-574400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-574400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-574400 -- "sudo cat /etc/kubernetes/admin.conf": (9.927418s)
helpers_test.go:175: Cleaning up "cert-options-574400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-574400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-574400: (46.9101859s)
--- PASS: TestCertOptions (495.16s)
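
The openssl step above exists to confirm that the --apiserver-ips and --apiserver-names flags landed as SANs in the generated apiserver serving certificate. The same inspection can be done with Go's crypto/x509 against a copy of the cert; the local filename here is an assumption:

	// cert_sans.go — sketch of the SAN check that openssl performs above.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumes apiserver.crt was copied out of the VM first, e.g. via
		// minikube ssh "cat /var/lib/minikube/certs/apiserver.crt".
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15
		fmt.Println("expires:", cert.NotAfter)
	}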

TestCertExpiration (902.36s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-635800 --memory=2048 --cert-expiration=3m --driver=hyperv
E0612 15:41:13.942311    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-635800 --memory=2048 --cert-expiration=3m --driver=hyperv: (5m43.0284524s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-635800 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-635800 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m30.79058s)
helpers_test.go:175: Cleaning up "cert-expiration-635800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-635800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-635800: (48.5435507s)
--- PASS: TestCertExpiration (902.36s)

TestDockerFlags (434.92s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-616300 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-616300 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m7.2879182s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-616300 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-616300 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.2153588s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-616300 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-616300 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.3518615s)
helpers_test.go:175: Cleaning up "docker-flags-616300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-616300
E0612 15:49:17.190570    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-616300: (47.0616886s)
--- PASS: TestDockerFlags (434.92s)
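
The two systemctl probes assert that the --docker-env values surface in the docker unit's Environment property and the --docker-opt values in its ExecStart line. A sketch of the Environment half of that check, using a line of the shape systemctl prints:

	// docker_env_check.go — sketch of the Environment-property assertion.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Example line as printed by:
		//   systemctl show docker --property=Environment --no-pager
		line := "Environment=FOO=BAR BAZ=BAT"
		env := strings.TrimPrefix(line, "Environment=")
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(env, want) {
				fmt.Println("missing:", want)
				return
			}
		}
		fmt.Println("all --docker-env values present")
	}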

TestForceSystemdFlag (537.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-038300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-038300 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (8m8.7133846s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-038300 ssh "docker info --format {{.CgroupDriver}}"
E0612 15:36:13.934119    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-038300 ssh "docker info --format {{.CgroupDriver}}": (9.6133476s)
helpers_test.go:175: Cleaning up "force-systemd-flag-038300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-038300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-038300: (39.0852496s)
--- PASS: TestForceSystemdFlag (537.42s)
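
docker_test.go:110 requires the daemon to report systemd as its cgroup driver once --force-systemd is set. The same probe, pointed at a local Docker daemon instead of going through `minikube ssh`:

	// cgroup_driver.go — sketch of the cgroup-driver probe.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		driver := strings.TrimSpace(string(out))
		fmt.Println("cgroup driver:", driver)
		if driver != "systemd" {
			fmt.Println("expected systemd")
		}
	}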

TestForceSystemdEnv (417.26s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-837500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-837500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m58.7071965s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-837500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-837500 ssh "docker info --format {{.CgroupDriver}}": (10.9868968s)
helpers_test.go:175: Cleaning up "force-systemd-env-837500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-837500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-837500: (47.5544621s)
--- PASS: TestForceSystemdEnv (417.26s)

TestErrorSpam/start (16.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 start --dry-run: (5.6220289s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 start --dry-run: (5.5103037s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 start --dry-run
E0612 13:12:35.881522    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 start --dry-run: (5.4786812s)
--- PASS: TestErrorSpam/start (16.63s)
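
Each TestErrorSpam subtest runs the same subcommand three times and fails if unexpected warning or error lines show up in the output. A loose sketch of that scan; the substring filter is a simplification of the real test's pattern list:

	// spam_scan.go — sketch of scanning a subcommand's stderr for spam.
	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "nospam-736400", "start", "--dry-run")
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		_ = cmd.Run()
		sc := bufio.NewScanner(&stderr)
		for sc.Scan() {
			line := sc.Text()
			low := strings.ToLower(line)
			if strings.Contains(low, "error") || strings.Contains(low, "fail") {
				fmt.Println("unexpected spam:", line)
			}
		}
	}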

TestErrorSpam/status (35.27s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 status: (12.0845164s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 status: (11.5420853s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 status: (11.6238052s)
--- PASS: TestErrorSpam/status (35.27s)

TestErrorSpam/pause (21.75s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 pause: (7.5287114s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 pause: (7.0877867s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 pause: (7.1075549s)
--- PASS: TestErrorSpam/pause (21.75s)

TestErrorSpam/unpause (22.02s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 unpause: (7.3914205s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 unpause: (7.3677509s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 unpause: (7.2458723s)
--- PASS: TestErrorSpam/unpause (22.02s)

TestErrorSpam/stop (55.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 stop: (34.4616292s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 stop: (10.7147498s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 stop
E0612 13:14:51.901313    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-736400 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-736400 stop: (10.2283632s)
--- PASS: TestErrorSpam/stop (55.43s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\1280\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (236.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-269100 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0612 13:15:19.731591    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-269100 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m56.1030805s)
--- PASS: TestFunctional/serial/StartWithProxy (236.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (124.1s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-269100 --alsologtostderr -v=8
E0612 13:19:51.897113    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-269100 --alsologtostderr -v=8: (2m4.0877102s)
functional_test.go:659: soft start took 2m4.0892172s for "functional-269100" cluster.
--- PASS: TestFunctional/serial/SoftStart (124.10s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-269100 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (25.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 cache add registry.k8s.io/pause:3.1: (8.8873513s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 cache add registry.k8s.io/pause:3.3: (8.5248138s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 cache add registry.k8s.io/pause:latest: (8.2565616s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (25.68s)

TestFunctional/serial/CacheCmd/cache/add_local (10.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-269100 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2675952862\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-269100 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2675952862\001: (2.3510039s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cache add minikube-local-cache-test:functional-269100
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 cache add minikube-local-cache-test:functional-269100: (7.9505046s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cache delete minikube-local-cache-test:functional-269100
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-269100
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.70s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.17s)

TestFunctional/serial/CacheCmd/cache/list (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.17s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh sudo crictl images: (8.9727008s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.97s)

TestFunctional/serial/CacheCmd/cache/cache_reload (35.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.0410903s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-269100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.9884498s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0612 13:22:06.209034    8788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 cache reload: (8.0962862s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.0914464s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (35.22s)
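
The sequence above deletes a cached image inside the VM, confirms crictl can no longer find it, then restores it with `minikube cache reload` and re-checks. A sketch of the same round-trip, with the binary path and profile name taken from this log:

	// cache_roundtrip.go — sketch of the rmi / inspect / reload / inspect cycle.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p := "functional-269100"
		_ = run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("image unexpectedly still present")
		}
		_ = run("-p", p, "cache", "reload")
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image missing after reload")
		}
	}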

TestFunctional/serial/CacheCmd/cache/delete (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.37s)

TestFunctional/serial/MinikubeKubectlCmd (0.43s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 kubectl -- --context functional-269100 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.43s)

TestFunctional/serial/ExtraConfig (125.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-269100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0612 13:24:51.901337    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-269100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m5.9859863s)
functional_test.go:757: restart took 2m5.993373s for "functional-269100" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (125.99s)

TestFunctional/serial/ComponentHealth (0.18s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-269100 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)
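
This subtest lists the tier=control-plane pods as JSON and requires each to be Running with a Ready condition of True, which is what the phase/status pairs above reflect. A standalone sketch of that decode-and-check:

	// component_health.go — sketch of walking control-plane pod status.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-269100",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			name := p.Metadata.Labels["component"]
			fmt.Printf("%s phase: %s\n", name, p.Status.Phase)
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s ready: %s\n", name, c.Status)
				}
			}
		}
	}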

TestFunctional/serial/LogsCmd (8.28s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 logs: (8.2731785s)
--- PASS: TestFunctional/serial/LogsCmd (8.28s)

TestFunctional/serial/LogsFileCmd (10.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4066711723\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4066711723\001\logs.txt: (10.1970914s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.21s)

TestFunctional/serial/InvalidService (20.42s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-269100 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-269100
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-269100: exit status 115 (15.9596144s)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.23.195.181:30610 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	W0612 13:25:34.001935    2092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-269100 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-269100 delete -f testdata\invalidsvc.yaml: (1.0752135s)
--- PASS: TestFunctional/serial/InvalidService (20.42s)
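
This is a negative test: `minikube service` against a service whose pods never start must fail, and the harness expects the specific exit status 115 (which the output above ties to SVC_UNREACHABLE) rather than an arbitrary crash. A sketch of asserting on a child process's exit code in Go:

	// expect_exit.go — sketch of checking a deliberate non-zero exit.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"service", "invalid-svc", "-p", "functional-269100")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // the run above saw 115
			return
		}
		fmt.Println("expected a non-zero exit, got:", err)
	}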

TestFunctional/parallel/StatusCmd (42.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 status: (13.983762s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.1347274s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 status -o json: (14.0300317s)
--- PASS: TestFunctional/parallel/StatusCmd (42.16s)

TestFunctional/parallel/ServiceCmdConnect (29.06s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-269100 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-269100 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-hcxrn" [20c0af68-dc9f-4300-85be-272119ab9863] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-hcxrn" [20c0af68-dc9f-4300-85be-272119ab9863] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0141375s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 service hello-node-connect --url: (19.5244789s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.23.195.181:30483
functional_test.go:1671: http://172.23.195.181:30483: success! body:

Hostname: hello-node-connect-57b4589c47-hcxrn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.23.195.181:8080/

Request Headers:
	accept-encoding=gzip
	host=172.23.195.181:30483
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (29.06s)
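
After `minikube service ... --url` hands back the NodePort endpoint, the test fetches it over HTTP and checks the echoserver body captured above. A sketch of that probe with retries; the URL is the one from this run and changes every run:

	// svc_probe.go — sketch of the HTTP check against the NodePort endpoint.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "http://172.23.195.181:30483" // value from this run only
		var lastErr error
		for i := 0; i < 10; i++ {
			resp, err := http.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
				return
			}
			lastErr = err
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up:", lastErr)
	}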

TestFunctional/parallel/AddonsCmd (0.64s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.64s)

TestFunctional/parallel/PersistentVolumeClaim (40.19s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a5945727-bd26-4c6e-8afe-1ae05bcd4944] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0145853s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-269100 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-269100 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-269100 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-269100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8bb0a2e6-5a25-4013-8a70-39fe3322d65c] Pending
helpers_test.go:344: "sp-pod" [8bb0a2e6-5a25-4013-8a70-39fe3322d65c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8bb0a2e6-5a25-4013-8a70-39fe3322d65c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.0123843s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-269100 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-269100 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-269100 delete -f testdata/storage-provisioner/pod.yaml: (1.1377034s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-269100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02cc4b14-707e-4125-a9de-830ecf97da80] Pending
helpers_test.go:344: "sp-pod" [02cc4b14-707e-4125-a9de-830ecf97da80] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [02cc4b14-707e-4125-a9de-830ecf97da80] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0126446s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-269100 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.19s)
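
The second half of this test is a persistence round-trip: write a file through the first sp-pod, delete the pod, recreate it against the same claim, and confirm the file is still there. A compressed sketch of those steps; it omits the wait for the recreated pod to reach Running, which the real test performs between apply and the final exec:

	// pvc_roundtrip.go — sketch of the write / delete / recreate / read cycle.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) ([]byte, error) {
		full := append([]string{"--context", "functional-269100"}, args...)
		return exec.Command("kubectl", full...).CombinedOutput()
	}

	func main() {
		steps := [][]string{
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			// (wait for the new sp-pod to be Running before the next step)
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
		}
		for _, s := range steps {
			out, err := kubectl(s...)
			fmt.Printf("$ kubectl %v\n%s", s, out)
			if err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}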

TestFunctional/parallel/SSHCmd (20.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "echo hello": (10.147163s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "cat /etc/hostname": (10.4721182s)
--- PASS: TestFunctional/parallel/SSHCmd (20.63s)

TestFunctional/parallel/CpCmd (60.6s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.8494788s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh -n functional-269100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh -n functional-269100 "sudo cat /home/docker/cp-test.txt": (11.0226222s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cp functional-269100:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd1374865726\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 cp functional-269100:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd1374865726\001\cp-test.txt: (10.7850237s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh -n functional-269100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh -n functional-269100 "sudo cat /home/docker/cp-test.txt": (10.546257s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.1618603s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh -n functional-269100 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh -n functional-269100 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.2170305s)
--- PASS: TestFunctional/parallel/CpCmd (60.60s)

TestFunctional/parallel/MySQL (67s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-269100 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-hdznl" [84012751-cbc8-4689-8916-17d7b0772d5b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0612 13:26:15.100871    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "mysql-64454c8b5c-hdznl" [84012751-cbc8-4689-8916-17d7b0772d5b] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 50.0064813s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;": exit status 1 (351.8476ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;": exit status 1 (313.3161ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;": exit status 1 (320.8092ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;": exit status 1 (335.3908ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;": exit status 1 (338.4136ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-269100 exec mysql-64454c8b5c-hdznl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (67.00s)
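The retry sequence above is the expected startup pattern for the MySQL pod: first the server socket is not up yet (ERROR 2002), then credentials are still being provisioned (ERROR 1045), and the test simply re-runs the query until it exits 0. A minimal standalone sketch of that poll loop, assuming a hypothetical waitForMySQL helper rather than the actual functional_test.go code:

```go
// Hypothetical sketch: re-run `kubectl exec ... mysql -e "show databases;"`
// until it succeeds, tolerating the transient 2002/1045 errors seen in the
// log while the container initializes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMySQL(context, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--context", context, "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return nil
		}
		time.Sleep(2 * time.Second) // transient ERROR 2002/1045: retry
	}
	return fmt.Errorf("mysql not ready within %s", timeout)
}

func main() {
	if err := waitForMySQL("functional-269100", "mysql-64454c8b5c-hdznl", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```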

                                                
                                    
TestFunctional/parallel/FileSync (11.1s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1280/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/test/nested/copy/1280/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/test/nested/copy/1280/hosts": (11.0959735s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.10s)

TestFunctional/parallel/CertSync (64.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1280.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/ssl/certs/1280.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/ssl/certs/1280.pem": (11.179052s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1280.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /usr/share/ca-certificates/1280.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /usr/share/ca-certificates/1280.pem": (11.0622669s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.6607483s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/12802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/ssl/certs/12802.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/ssl/certs/12802.pem": (10.0815967s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/12802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /usr/share/ca-certificates/12802.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /usr/share/ca-certificates/12802.pem": (11.4238931s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.9503904s)
--- PASS: TestFunctional/parallel/CertSync (64.38s)

TestFunctional/parallel/NodeLabels (0.19s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-269100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)
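The go-template in this test prints only the label keys of the first node by ranging over its labels map. A self-contained sketch of the same template mechanics, run against hypothetical sample data instead of a live cluster:

```go
// Demonstrates the `{{range $k, $v := ...}}{{$k}} {{end}}` template used by
// the test: iterate a map and emit only the keys. Sample data is made up.
package main

import (
	"os"
	"text/template"
)

func main() {
	data := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{
				"labels": map[string]string{
					"kubernetes.io/hostname": "functional-269100",
					"kubernetes.io/os":       "linux",
				},
			}},
		},
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```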

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (11.39s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-269100 ssh "sudo systemctl is-active crio": exit status 1 (11.387484s)

-- stdout --
	inactive

-- /stdout --
** stderr **
	W0612 13:25:51.100608    2884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.39s)
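`systemctl is-active` prints the unit state and exits 0 only when the unit is active, so the `inactive` stdout plus ssh exit status 3 above is exactly what the test wants to see for a runtime that is not in use. A minimal local sketch of the same probe (the test runs it over `minikube ssh`; this version assumes a systemd host):

```go
// Checks whether a systemd unit is active by exit code, mirroring the
// `systemctl is-active crio` probe in the log.
package main

import (
	"fmt"
	"os/exec"
)

func isActive(unit string) (bool, error) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	if err == nil {
		return true, nil // exit 0: unit is active
	}
	if _, ok := err.(*exec.ExitError); ok {
		// Non-zero exit (3 here) with output such as "inactive": not active.
		fmt.Printf("unit %s state: %s", unit, out)
		return false, nil
	}
	return false, err // systemctl itself could not be invoked
}

func main() {
	active, err := isActive("crio")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("crio active:", active)
}
```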

                                                
                                    
TestFunctional/parallel/License (3.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.2029386s)
--- PASS: TestFunctional/parallel/License (3.22s)

TestFunctional/parallel/DockerEnv/powershell (45.96s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-269100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-269100"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-269100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-269100": (30.8541391s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-269100 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-269100 docker-env | Invoke-Expression ; docker images": (15.0813601s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (45.96s)

TestFunctional/parallel/UpdateContextCmd/no_changes (3.37s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 update-context --alsologtostderr -v=2: (3.3651819s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (3.37s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.82s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 update-context --alsologtostderr -v=2: (2.8101358s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.82s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.76s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 update-context --alsologtostderr -v=2: (2.7466225s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.76s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls --format short --alsologtostderr: (7.8426784s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-269100 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-269100
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-269100
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-269100 image ls --format short --alsologtostderr:
W0612 13:29:04.799173    6488 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0612 13:29:04.809114    6488 out.go:291] Setting OutFile to fd 1316 ...
I0612 13:29:04.815017    6488 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:04.815017    6488 out.go:304] Setting ErrFile to fd 1312...
I0612 13:29:04.815017    6488 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:04.838702    6488 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:04.838783    6488 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:04.840191    6488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:07.169178    6488 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:07.169178    6488 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:07.189332    6488 ssh_runner.go:195] Run: systemctl --version
I0612 13:29:07.189332    6488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:09.582426    6488 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:09.582426    6488 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:09.582699    6488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
I0612 13:29:12.344680    6488 main.go:141] libmachine: [stdout =====>] : 172.23.195.181

I0612 13:29:12.344767    6488 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:12.344823    6488 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
I0612 13:29:12.457118    6488 ssh_runner.go:235] Completed: systemctl --version: (5.2677699s)
I0612 13:29:12.470908    6488 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.85s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls --format table --alsologtostderr: (7.291652s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-269100 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 25a1387cdab82 | 111MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| docker.io/library/nginx                     | alpine            | 70ea0d8cc5300 | 48.3MB |
| docker.io/library/nginx                     | latest            | 4f67c83422ec7 | 188MB  |
| registry.k8s.io/kube-scheduler              | v1.30.1           | a52dc94f0a912 | 62MB   |
| gcr.io/google-containers/addon-resizer      | functional-269100 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 747097150317f | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-269100 | 368a7393cd972 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 91be940803172 | 117MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-269100 image ls --format table --alsologtostderr:
W0612 13:29:19.321789   13904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0612 13:29:19.332111   13904 out.go:291] Setting OutFile to fd 1224 ...
I0612 13:29:19.338631   13904 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:19.338631   13904 out.go:304] Setting ErrFile to fd 1196...
I0612 13:29:19.338631   13904 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:19.358199   13904 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:19.359168   13904 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:19.359548   13904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:21.584866   13904 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:21.584866   13904 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:21.596763   13904 ssh_runner.go:195] Run: systemctl --version
I0612 13:29:21.596763   13904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:23.742376   13904 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:23.754131   13904 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:23.754216   13904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
I0612 13:29:26.298820   13904 main.go:141] libmachine: [stdout =====>] : 172.23.195.181

I0612 13:29:26.298984   13904 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:26.299032   13904 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
I0612 13:29:26.444461   13904 ssh_runner.go:235] Completed: systemctl --version: (4.8476837s)
I0612 13:29:26.453819   13904 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls --format json --alsologtostderr: (7.9366187s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-269100 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"84700000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"368a7393cd972005e04065e5a08eb67119c7be9076b2cb02fdaf4301b184487a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-269100"],"size":"30"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-269100"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117000000"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"111000000"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"62000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-269100 image ls --format json --alsologtostderr:
W0612 13:29:12.664361    2112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0612 13:29:12.675878    2112 out.go:291] Setting OutFile to fd 1564 ...
I0612 13:29:12.676345    2112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:12.676345    2112 out.go:304] Setting ErrFile to fd 1568...
I0612 13:29:12.676345    2112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:12.692570    2112 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:12.692570    2112 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:12.698888    2112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:15.075059    2112 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:15.087864    2112 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:15.099937    2112 ssh_runner.go:195] Run: systemctl --version
I0612 13:29:15.099937    2112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:17.502222    2112 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:17.508040    2112 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:17.508040    2112 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
I0612 13:29:20.290564    2112 main.go:141] libmachine: [stdout =====>] : 172.23.195.181

I0612 13:29:20.290646    2112 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:20.290646    2112 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
I0612 13:29:20.399713    2112 ssh_runner.go:235] Completed: systemctl --version: (5.2997596s)
I0612 13:29:20.415389    2112 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.95s)
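The JSON output above is a single array of image records with id, repoDigests, repoTags, and size fields. A minimal sketch of decoding it, with the struct inferred from the fields visible in the log (this is not minikube's own type):

```go
// Decodes `minikube image ls --format json` style output. The struct is a
// guess based on the fields shown in the log; sizes arrive as strings.
package main

import (
	"encoding/json"
	"fmt"
)

type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	raw := []byte(`[{"id":"5107333e08a8","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"}]`)
	var images []imageRecord
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s -> %v (%s bytes)\n", img.ID, img.RepoTags, img.Size)
	}
}
```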

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (8.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls --format yaml --alsologtostderr: (8.0025974s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-269100 image ls --format yaml --alsologtostderr:
- id: 368a7393cd972005e04065e5a08eb67119c7be9076b2cb02fdaf4301b184487a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-269100
size: "30"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "62000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "111000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117000000"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "84700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-269100
size: "32900000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-269100 image ls --format yaml --alsologtostderr:
W0612 13:29:11.318748   15356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0612 13:29:11.320165   15356 out.go:291] Setting OutFile to fd 1168 ...
I0612 13:29:11.342629   15356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:11.342697   15356 out.go:304] Setting ErrFile to fd 1464...
I0612 13:29:11.342697   15356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:11.358008   15356 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:11.359481   15356 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:11.360141   15356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:13.890444   15356 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:13.890550   15356 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:13.906141   15356 ssh_runner.go:195] Run: systemctl --version
I0612 13:29:13.906141   15356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:16.315480   15356 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:16.315480   15356 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:16.315639   15356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
I0612 13:29:19.043389   15356 main.go:141] libmachine: [stdout =====>] : 172.23.195.181

I0612 13:29:19.043448   15356 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:19.043448   15356 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
I0612 13:29:19.136527   15356 ssh_runner.go:235] Completed: systemctl --version: (5.23037s)
I0612 13:29:19.146917   15356 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.01s)

TestFunctional/parallel/ImageCommands/ImageBuild (27.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-269100 ssh pgrep buildkitd: exit status 1 (10.2129891s)

** stderr ** 
	W0612 13:29:07.455560    6204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image build -t localhost/my-image:functional-269100 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image build -t localhost/my-image:functional-269100 testdata\build --alsologtostderr: (9.7463484s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-269100 image build -t localhost/my-image:functional-269100 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 638af1497e8f
---> Removed intermediate container 638af1497e8f
---> 14d69b3f3088
Step 3/3 : ADD content.txt /
---> b5533551d64a
Successfully built b5533551d64a
Successfully tagged localhost/my-image:functional-269100
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-269100 image build -t localhost/my-image:functional-269100 testdata\build --alsologtostderr:
W0612 13:29:17.644413   13556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0612 13:29:17.654980   13556 out.go:291] Setting OutFile to fd 1192 ...
I0612 13:29:17.676231   13556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:17.676231   13556 out.go:304] Setting ErrFile to fd 1240...
I0612 13:29:17.676353   13556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0612 13:29:17.697290   13556 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:17.714863   13556 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0612 13:29:17.716299   13556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:20.011001   13556 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:20.011001   13556 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:20.033508   13556 ssh_runner.go:195] Run: systemctl --version
I0612 13:29:20.033508   13556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-269100 ).state
I0612 13:29:22.249104   13556 main.go:141] libmachine: [stdout =====>] : Running

I0612 13:29:22.249104   13556 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:22.249229   13556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-269100 ).networkadapters[0]).ipaddresses[0]
I0612 13:29:24.797670   13556 main.go:141] libmachine: [stdout =====>] : 172.23.195.181

I0612 13:29:24.797670   13556 main.go:141] libmachine: [stderr =====>] : 
I0612 13:29:24.809843   13556 sshutil.go:53] new ssh client: &{IP:172.23.195.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-269100\id_rsa Username:docker}
I0612 13:29:24.910533   13556 ssh_runner.go:235] Completed: systemctl --version: (4.877011s)
I0612 13:29:24.910533   13556 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2364456423.tar
I0612 13:29:24.923368   13556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0612 13:29:24.954847   13556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2364456423.tar
I0612 13:29:24.961752   13556 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2364456423.tar: stat -c "%s %y" /var/lib/minikube/build/build.2364456423.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2364456423.tar': No such file or directory
I0612 13:29:24.962283   13556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2364456423.tar --> /var/lib/minikube/build/build.2364456423.tar (3072 bytes)
I0612 13:29:25.022278   13556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2364456423
I0612 13:29:25.054919   13556 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2364456423 -xf /var/lib/minikube/build/build.2364456423.tar
I0612 13:29:25.085200   13556 docker.go:360] Building image: /var/lib/minikube/build/build.2364456423
I0612 13:29:25.095503   13556 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-269100 /var/lib/minikube/build/build.2364456423
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0612 13:29:27.196694   13556 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-269100 /var/lib/minikube/build/build.2364456423: (2.1011849s)
I0612 13:29:27.208528   13556 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2364456423
I0612 13:29:27.242221   13556 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2364456423.tar
I0612 13:29:27.261542   13556 build_images.go:217] Built localhost/my-image:functional-269100 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.2364456423.tar
I0612 13:29:27.261542   13556 build_images.go:133] succeeded building to: functional-269100
I0612 13:29:27.261542   13556 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls: (7.0674068s)
E0612 13:29:51.902075    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (27.03s)
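The stderr trace above shows the build pipeline: the local testdata\build context is tarred, copied over ssh into /var/lib/minikube/build, extracted, and then built by the VM's Docker daemon. A reduced local sketch of that final step, assuming a Docker CLI on PATH (names are illustrative, not minikube's code):

```go
// Runs `docker build` on a context directory and streams its output, much
// like the command minikube issues inside the VM:
//   docker build -t localhost/my-image:functional-269100 <context dir>
package main

import (
	"os"
	"os/exec"
)

func buildImage(tag, contextDir string) error {
	cmd := exec.Command("docker", "build", "-t", tag, contextDir)
	cmd.Stdout = os.Stdout // stream the "Step 1/3 ..." lines as in the log
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := buildImage("localhost/my-image:functional-269100", "testdata/build"); err != nil {
		os.Exit(1)
	}
}
```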

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (6.01s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.6981597s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-269100
--- PASS: TestFunctional/parallel/ImageCommands/Setup (6.01s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (23.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image load --daemon gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image load --daemon gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr: (15.7038531s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls: (7.7389627s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (23.45s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-269100 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-269100 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-5krrc" [49bf30e8-34b6-4dbd-8472-048b8c509038] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-5krrc" [49bf30e8-34b6-4dbd-8472-048b8c509038] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.0243034s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image load --daemon gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image load --daemon gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr: (12.3350067s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls: (8.6381211s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.98s)

TestFunctional/parallel/ServiceCmd/List (14.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 service list: (14.2835341s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.89s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-269100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-269100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-269100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1836: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 7164: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-269100 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.89s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-269100 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (30.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-269100 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [46c95bd1-330d-473c-b07a-f3150ca22a80] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [46c95bd1-330d-473c-b07a-f3150ca22a80] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 30.0192266s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (30.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.96s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 service list -o json: (14.9553213s)
functional_test.go:1490: Took "14.9554438s" to run "out/minikube-windows-amd64.exe -p functional-269100 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.7831721s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-269100
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image load --daemon gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image load --daemon gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr: (14.9347219s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls: (7.483172s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.52s)
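In outline, the load-from-daemon path verified above is (image names as in the log):

    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-269100
    # copy the tagged image from the host Docker daemon into the cluster's container runtime
    out/minikube-windows-amd64.exe -p functional-269100 image load --daemon gcr.io/google-containers/addon-resizer:functional-269100
    # confirm the image is now visible inside the cluster
    out/minikube-windows-amd64.exe -p functional-269100 image ls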

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image save gcr.io/google-containers/addon-resizer:functional-269100 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image save gcr.io/google-containers/addon-resizer:functional-269100 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.7434374s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.74s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-269100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13224: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (16.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image rm gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image rm gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr: (8.4686155s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls: (8.3464093s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.09131s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image ls: (8.3522267s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.45s)
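Together with ImageSaveToFile and ImageRemove above, this test round-trips an image through a tar archive; a minimal sketch of the sequence (the Jenkins workspace path is shortened here to a relative one):

    # export an in-cluster image to a tarball on the host
    out/minikube-windows-amd64.exe -p functional-269100 image save gcr.io/google-containers/addon-resizer:functional-269100 addon-resizer-save.tar
    # remove it from the cluster, then re-import from the tarball and verify
    out/minikube-windows-amd64.exe -p functional-269100 image rm gcr.io/google-containers/addon-resizer:functional-269100
    out/minikube-windows-amd64.exe -p functional-269100 image load addon-resizer-save.tar
    out/minikube-windows-amd64.exe -p functional-269100 image ls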

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (11.89s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.3996445s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.89s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (11.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (11.2825235s)
functional_test.go:1311: Took "11.2848186s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "176.1772ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (11.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-269100
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 image save --daemon gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 image save --daemon gcr.io/google-containers/addon-resizer:functional-269100 --alsologtostderr: (10.1546361s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-269100
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.72s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (11.89s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.6659928s)
functional_test.go:1362: Took "11.6660404s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "201.1346ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.89s)
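The two listing modes the profile tests compare, as invoked above; the timing gap (roughly 11s versus 0.2s on this run) is consistent with the full listing querying each running cluster while --light appears to read only the stored profile configs:

    out/minikube-windows-amd64.exe profile list -o json
    out/minikube-windows-amd64.exe profile list -o json --light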

                                                
                                    
TestFunctional/parallel/Version/short (0.23s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 version --short
--- PASS: TestFunctional/parallel/Version/short (0.23s)

                                                
                                    
TestFunctional/parallel/Version/components (8.53s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-269100 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-269100 version -o=json --components: (8.5208312s)
--- PASS: TestFunctional/parallel/Version/components (8.53s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.45s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-269100
--- PASS: TestFunctional/delete_addon-resizer_images (0.45s)

                                                
                                    
TestFunctional/delete_my-image_image (0.17s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-269100
--- PASS: TestFunctional/delete_my-image_image (0.17s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.17s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-269100
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (718.4s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-957600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0612 13:36:13.915654    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:13.930426    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:13.946421    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:13.977981    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:14.024702    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:14.119015    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:14.293361    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:14.628042    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:15.280050    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:16.564571    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:19.129846    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:24.255436    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:34.500360    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:36:54.989432    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:37:35.959549    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:38:57.892239    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:39:51.909353    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:41:13.916340    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:41:41.742732    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 13:42:55.104674    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:44:51.900332    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:46:13.917901    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-957600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m21.4713592s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 status -v=7 --alsologtostderr: (36.9290503s)
--- PASS: TestMultiControlPlane/serial/StartCluster (718.40s)
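The cluster under test is created with minikube's HA mode; stripped of test plumbing, the invocation is:

    # --ha provisions a multi-node control plane (three control-plane nodes by default);
    # --wait=true blocks until core components report healthy
    out/minikube-windows-amd64.exe start -p ha-957600 --wait=true --memory=2200 --ha --driver=hyperv
    out/minikube-windows-amd64.exe -p ha-957600 status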

                                                
                                    
TestMultiControlPlane/serial/DeployApp (12.62s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-957600 -- rollout status deployment/busybox: (4.6770719s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-q7zbt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-q7zbt -- nslookup kubernetes.io: (1.7120972s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-qhrx6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-sfrgv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-sfrgv -- nslookup kubernetes.io: (1.5915677s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-q7zbt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-qhrx6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-sfrgv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-q7zbt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-qhrx6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-957600 -- exec busybox-fc5497c4f-sfrgv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.62s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (258.53s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-957600 -v=7 --alsologtostderr
E0612 13:49:51.906890    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 13:51:13.915292    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-957600 -v=7 --alsologtostderr: (3m29.2990511s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 status -v=7 --alsologtostderr: (49.2337143s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (258.53s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.19s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-957600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (29.15s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0612 13:52:37.118082    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.1515007s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.15s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (642.76s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 status --output json -v=7 --alsologtostderr: (49.3329122s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600:/home/docker/cp-test.txt: (9.8607554s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt": (9.6977408s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600.txt: (9.7516983s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt": (9.7705784s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600:/home/docker/cp-test.txt ha-957600-m02:/home/docker/cp-test_ha-957600_ha-957600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600:/home/docker/cp-test.txt ha-957600-m02:/home/docker/cp-test_ha-957600_ha-957600-m02.txt: (16.9915699s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt"
E0612 13:54:51.900335    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt": (9.7158249s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test_ha-957600_ha-957600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test_ha-957600_ha-957600-m02.txt": (9.646926s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600:/home/docker/cp-test.txt ha-957600-m03:/home/docker/cp-test_ha-957600_ha-957600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600:/home/docker/cp-test.txt ha-957600-m03:/home/docker/cp-test_ha-957600_ha-957600-m03.txt: (17.2439255s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt": (9.7846699s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test_ha-957600_ha-957600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test_ha-957600_ha-957600-m03.txt": (9.6845023s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600:/home/docker/cp-test.txt ha-957600-m04:/home/docker/cp-test_ha-957600_ha-957600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600:/home/docker/cp-test.txt ha-957600-m04:/home/docker/cp-test_ha-957600_ha-957600-m04.txt: (16.8921675s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test.txt": (9.7022131s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test_ha-957600_ha-957600-m04.txt"
E0612 13:56:13.919078    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test_ha-957600_ha-957600-m04.txt": (9.8096568s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600-m02:/home/docker/cp-test.txt: (9.6877048s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt": (9.6427704s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600-m02.txt: (9.6197887s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt": (9.5922087s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt ha-957600:/home/docker/cp-test_ha-957600-m02_ha-957600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt ha-957600:/home/docker/cp-test_ha-957600-m02_ha-957600.txt: (17.133263s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt": (9.6795681s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test_ha-957600-m02_ha-957600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test_ha-957600-m02_ha-957600.txt": (9.7057557s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt ha-957600-m03:/home/docker/cp-test_ha-957600-m02_ha-957600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt ha-957600-m03:/home/docker/cp-test_ha-957600-m02_ha-957600-m03.txt: (16.8552346s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt": (9.6480309s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test_ha-957600-m02_ha-957600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test_ha-957600-m02_ha-957600-m03.txt": (9.7762589s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt ha-957600-m04:/home/docker/cp-test_ha-957600-m02_ha-957600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt ha-957600-m04:/home/docker/cp-test_ha-957600-m02_ha-957600-m04.txt: (17.0313815s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test.txt": (9.6844973s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test_ha-957600-m02_ha-957600-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test_ha-957600-m02_ha-957600-m04.txt": (9.6822352s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600-m03:/home/docker/cp-test.txt: (9.5451184s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt": (9.4997767s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600-m03.txt: (9.8308742s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt": (9.6782746s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt ha-957600:/home/docker/cp-test_ha-957600-m03_ha-957600.txt
E0612 13:59:35.120354    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt ha-957600:/home/docker/cp-test_ha-957600-m03_ha-957600.txt: (16.9053092s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt"
E0612 13:59:51.906078    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt": (9.6862396s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test_ha-957600-m03_ha-957600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test_ha-957600-m03_ha-957600.txt": (9.7924143s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt ha-957600-m02:/home/docker/cp-test_ha-957600-m03_ha-957600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt ha-957600-m02:/home/docker/cp-test_ha-957600-m03_ha-957600-m02.txt: (17.1321926s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt": (9.6848167s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test_ha-957600-m03_ha-957600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test_ha-957600-m03_ha-957600-m02.txt": (9.8882564s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt ha-957600-m04:/home/docker/cp-test_ha-957600-m03_ha-957600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m03:/home/docker/cp-test.txt ha-957600-m04:/home/docker/cp-test_ha-957600-m03_ha-957600-m04.txt: (17.1941487s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test.txt": (9.7856715s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test_ha-957600-m03_ha-957600-m04.txt"
E0612 14:01:13.926609    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test_ha-957600-m03_ha-957600-m04.txt": (9.9783491s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600-m04:/home/docker/cp-test.txt: (10.0495423s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt": (9.8283125s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3720701902\001\cp-test_ha-957600-m04.txt: (9.6555226s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt": (9.6976507s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt ha-957600:/home/docker/cp-test_ha-957600-m04_ha-957600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt ha-957600:/home/docker/cp-test_ha-957600-m04_ha-957600.txt: (17.0090479s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt": (9.6867061s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test_ha-957600-m04_ha-957600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600 "sudo cat /home/docker/cp-test_ha-957600-m04_ha-957600.txt": (9.6547535s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt ha-957600-m02:/home/docker/cp-test_ha-957600-m04_ha-957600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt ha-957600-m02:/home/docker/cp-test_ha-957600-m04_ha-957600-m02.txt: (16.9171691s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt": (9.6957886s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test_ha-957600-m04_ha-957600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m02 "sudo cat /home/docker/cp-test_ha-957600-m04_ha-957600-m02.txt": (9.6985857s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt ha-957600-m03:/home/docker/cp-test_ha-957600-m04_ha-957600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m04:/home/docker/cp-test.txt ha-957600-m03:/home/docker/cp-test_ha-957600-m04_ha-957600-m03.txt: (17.2728234s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m04 "sudo cat /home/docker/cp-test.txt": (9.6478283s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test_ha-957600-m04_ha-957600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test_ha-957600-m04_ha-957600-m03.txt": (9.6867025s)
--- PASS: TestMultiControlPlane/serial/CopyFile (642.76s)
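CopyFile fans the same test file across every node pair; each hop is a cp followed by an ssh read-back on the target node, e.g.:

    # host -> node
    out/minikube-windows-amd64.exe -p ha-957600 cp testdata\cp-test.txt ha-957600-m02:/home/docker/cp-test.txt
    # node -> node (source node named in the first argument)
    out/minikube-windows-amd64.exe -p ha-957600 cp ha-957600-m02:/home/docker/cp-test.txt ha-957600-m03:/home/docker/cp-test_ha-957600-m02_ha-957600-m03.txt
    # verify on the target
    out/minikube-windows-amd64.exe -p ha-957600 ssh -n ha-957600-m03 "sudo cat /home/docker/cp-test_ha-957600-m02_ha-957600-m03.txt"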

                                                
                                    
TestImageBuild/serial/Setup (191.34s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-997200 --driver=hyperv
E0612 14:09:17.136142    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 14:09:51.914447    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 14:11:13.920782    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-997200 --driver=hyperv: (3m11.3257005s)
--- PASS: TestImageBuild/serial/Setup (191.34s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.39s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-997200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-997200: (9.3894937s)
--- PASS: TestImageBuild/serial/NormalBuild (9.39s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.76s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-997200
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-997200: (8.7528598s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.76s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.51s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-997200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-997200: (7.5066153s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.51s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.21s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-997200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-997200: (7.2058946s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.21s)
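The four build variants above differ only in the flags passed to minikube image build; side by side:

    # plain build from a context directory
    out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-997200
    # build arg plus cache disabled, both via --build-opt
    out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-997200
    # Dockerfile at a non-default path within the context
    out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-997200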

                                                
                                    
TestJSONOutput/start/Command (203.82s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-507600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0612 14:14:51.916277    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-507600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m23.8142971s)
--- PASS: TestJSONOutput/start/Command (203.82s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.48s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-507600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-507600 --output=json --user=testUser: (7.4845578s)
--- PASS: TestJSONOutput/pause/Command (7.48s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.41s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-507600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-507600 --output=json --user=testUser: (7.4142097s)
--- PASS: TestJSONOutput/unpause/Command (7.41s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (39.75s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-507600 --output=json --user=testUser
E0612 14:16:13.920311    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 14:16:15.138251    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-507600 --output=json --user=testUser: (39.7527552s)
--- PASS: TestJSONOutput/stop/Command (39.75s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.28s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-094600 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-094600 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (197.7672ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f0d17dfd-7785-4812-922a-721136f59afd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-094600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c8c25eb-afe1-48c2-ad11-5d7443713782","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"8f16d492-8ee7-4ab0-b959-26196c48e515","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"64611192-144e-4721-bb66-78992ac0ccdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"fec5cfc8-f4ba-4c22-97b6-519cd073e6c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19044"}}
	{"specversion":"1.0","id":"a0e27e71-819f-4239-8154-cc87693388c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0e795c69-10b1-4641-b1b0-195c5dfd1187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0612 14:17:05.048161    6448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-094600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-094600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-094600: (1.0849751s)
--- PASS: TestErrorJSONOutput (1.28s)
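
The stdout above is minikube's --output=json stream: one CloudEvents-style JSON object per line, with step events (io.k8s.sigs.minikube.step) and a terminal error event (io.k8s.sigs.minikube.error) that carries the exit code. A minimal Go sketch for consuming such a stream, using only the field names visible in the log (the consumer itself is assumed, not part of the suite):

// Sketch: decoding minikube's --output=json stream (assumed consumer, not
// part of the test suite). Field names and event types are copied from the
// stdout block above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the per-line JSON objects emitted by
// `minikube start --output=json`, as shown above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe minikube's stdout in here
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines (e.g. stray warnings)
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}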

TestMainNoArgs (0.16s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (522.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-410300 --driver=hyperv
E0612 14:19:51.909054    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-410300 --driver=hyperv: (3m16.8913319s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-410300 --driver=hyperv
E0612 14:21:13.927517    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-410300 --driver=hyperv: (3m19.4756165s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-410300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.1239867s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-410300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.0683307s)
helpers_test.go:175: Cleaning up "second-410300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-410300
E0612 14:24:51.908796    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-410300: (40.9428662s)
helpers_test.go:175: Cleaning up "first-410300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-410300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-410300: (46.4529119s)
--- PASS: TestMinikubeProfile (522.66s)

TestMountStart/serial/StartWithMountFirst (155.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-443500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0612 14:25:57.151233    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 14:26:13.919721    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-443500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m34.0343239s)
--- PASS: TestMountStart/serial/StartWithMountFirst (155.04s)

TestMountStart/serial/VerifyMountFirst (9.56s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-443500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-443500 ssh -- ls /minikube-host: (9.5573009s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.56s)

TestMountStart/serial/StartWithMountSecond (156.35s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-443500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0612 14:29:51.910860    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-443500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m35.3348634s)
--- PASS: TestMountStart/serial/StartWithMountSecond (156.35s)

TestMountStart/serial/VerifyMountSecond (9.46s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-443500 ssh -- ls /minikube-host
E0612 14:31:13.924844    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-443500 ssh -- ls /minikube-host: (9.4573234s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.46s)

TestMountStart/serial/DeleteFirst (26.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-443500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-443500 --alsologtostderr -v=5: (26.7109722s)
--- PASS: TestMountStart/serial/DeleteFirst (26.72s)

TestMountStart/serial/VerifyMountPostDelete (8.81s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-443500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-443500 ssh -- ls /minikube-host: (8.8013108s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.81s)

TestMountStart/serial/Stop (29.01s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-443500
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-443500: (29.0064212s)
--- PASS: TestMountStart/serial/Stop (29.01s)

TestMultiNode/serial/FreshStart2Nodes (412.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-025000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0612 14:39:51.912114    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 14:41:13.926503    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 14:42:37.156407    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-025000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m28.9343665s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 status --alsologtostderr: (23.435597s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (412.37s)

TestMultiNode/serial/DeployApp2Nodes (8.41s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- rollout status deployment/busybox: (3.2711248s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-45qqd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-45qqd -- nslookup kubernetes.io: (1.6702603s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-9bsls -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-45qqd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-9bsls -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-45qqd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-025000 -- exec busybox-fc5497c4f-9bsls -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.41s)
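
Each DNS check above follows the same pattern: kubectl exec <pod> -- nslookup <name>, once per busybox pod, for kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local. A minimal sketch of that loop (pod names and lookup targets are from the log; the plain-kubectl invocation via --context is an assumption):

// Sketch of the per-pod DNS validation loop above; pod names and lookup
// targets are taken from the log, everything else is assumption.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-45qqd", "busybox-fc5497c4f-9bsls"}
	targets := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, target := range targets {
			out, err := exec.Command("kubectl", "--context", "multinode-025000",
				"exec", pod, "--", "nslookup", target).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s failed: %v\n%s\n", pod, target, err, out)
				continue
			}
			fmt.Printf("%s -> %s ok\n", pod, target)
		}
	}
}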

TestMultiNode/serial/AddNode (225.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-025000 -v 3 --alsologtostderr
E0612 14:44:51.912891    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 14:46:13.924163    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-025000 -v 3 --alsologtostderr: (3m9.8426852s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 status --alsologtostderr: (35.7986921s)
--- PASS: TestMultiNode/serial/AddNode (225.64s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-025000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (9.8s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.8030768s)
--- PASS: TestMultiNode/serial/ProfileList (9.80s)

TestMultiNode/serial/CopyFile (359.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 status --output json --alsologtostderr: (35.8542115s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp testdata\cp-test.txt multinode-025000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp testdata\cp-test.txt multinode-025000:/home/docker/cp-test.txt: (9.4957378s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test.txt": (9.5426918s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile283731824\001\cp-test_multinode-025000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile283731824\001\cp-test_multinode-025000.txt: (9.5019248s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test.txt"
E0612 14:49:35.156327    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test.txt": (9.4151263s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000:/home/docker/cp-test.txt multinode-025000-m02:/home/docker/cp-test_multinode-025000_multinode-025000-m02.txt
E0612 14:49:51.914606    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000:/home/docker/cp-test.txt multinode-025000-m02:/home/docker/cp-test_multinode-025000_multinode-025000-m02.txt: (16.4002529s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test.txt": (9.2780849s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test_multinode-025000_multinode-025000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test_multinode-025000_multinode-025000-m02.txt": (9.2874697s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000:/home/docker/cp-test.txt multinode-025000-m03:/home/docker/cp-test_multinode-025000_multinode-025000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000:/home/docker/cp-test.txt multinode-025000-m03:/home/docker/cp-test_multinode-025000_multinode-025000-m03.txt: (16.4637403s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test.txt": (9.3656767s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test_multinode-025000_multinode-025000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test_multinode-025000_multinode-025000-m03.txt": (9.386304s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp testdata\cp-test.txt multinode-025000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp testdata\cp-test.txt multinode-025000-m02:/home/docker/cp-test.txt: (9.2947093s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test.txt": (9.2729431s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile283731824\001\cp-test_multinode-025000-m02.txt
E0612 14:51:13.925932    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile283731824\001\cp-test_multinode-025000-m02.txt: (9.4664461s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test.txt": (9.4298302s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt multinode-025000:/home/docker/cp-test_multinode-025000-m02_multinode-025000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt multinode-025000:/home/docker/cp-test_multinode-025000-m02_multinode-025000.txt: (16.4142324s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test.txt": (9.3365807s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test_multinode-025000-m02_multinode-025000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test_multinode-025000-m02_multinode-025000.txt": (9.3730784s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt multinode-025000-m03:/home/docker/cp-test_multinode-025000-m02_multinode-025000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m02:/home/docker/cp-test.txt multinode-025000-m03:/home/docker/cp-test_multinode-025000-m02_multinode-025000-m03.txt: (16.485075s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test.txt": (9.390571s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test_multinode-025000-m02_multinode-025000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test_multinode-025000-m02_multinode-025000-m03.txt": (9.4606379s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp testdata\cp-test.txt multinode-025000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp testdata\cp-test.txt multinode-025000-m03:/home/docker/cp-test.txt: (9.5336399s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test.txt": (9.3296523s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile283731824\001\cp-test_multinode-025000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile283731824\001\cp-test_multinode-025000-m03.txt: (9.2980119s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test.txt": (9.3147706s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt multinode-025000:/home/docker/cp-test_multinode-025000-m03_multinode-025000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt multinode-025000:/home/docker/cp-test_multinode-025000-m03_multinode-025000.txt: (16.5422921s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test.txt": (9.435342s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test_multinode-025000-m03_multinode-025000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000 "sudo cat /home/docker/cp-test_multinode-025000-m03_multinode-025000.txt": (9.3892134s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt multinode-025000-m02:/home/docker/cp-test_multinode-025000-m03_multinode-025000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 cp multinode-025000-m03:/home/docker/cp-test.txt multinode-025000-m02:/home/docker/cp-test_multinode-025000-m03_multinode-025000-m02.txt: (16.4524866s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m03 "sudo cat /home/docker/cp-test.txt": (9.3765843s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test_multinode-025000-m03_multinode-025000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 ssh -n multinode-025000-m02 "sudo cat /home/docker/cp-test_multinode-025000-m03_multinode-025000-m02.txt": (9.3753251s)
--- PASS: TestMultiNode/serial/CopyFile (359.99s)
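
Every CopyFile step above is the same round trip: minikube cp puts a file on a node, then minikube ssh -n <node> "sudo cat ..." reads it back for comparison. A sketch of one such round trip (hypothetical harness, not the real helper; binary path, profile, and file paths are from the log):

// Sketch of a single cp round trip from the sequence above: copy a file to
// a node with `minikube cp`, read it back via `minikube ssh`, and compare.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin := `out/minikube-windows-amd64.exe`
	node := "multinode-025000-m02"

	local, err := os.ReadFile(`testdata\cp-test.txt`)
	if err != nil {
		panic(err)
	}
	if err := exec.Command(bin, "-p", "multinode-025000", "cp",
		`testdata\cp-test.txt`, node+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	remote, err := exec.Command(bin, "-p", "multinode-025000", "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("contents match:",
		bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)))
}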

TestMultiNode/serial/StopNode (76.66s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 node stop m03: (24.7833568s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 status
E0612 14:54:51.926257    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-025000 status: exit status 7 (25.7445155s)

-- stdout --
	multinode-025000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0612 14:54:47.812297    9868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-025000 status --alsologtostderr: exit status 7 (26.1355633s)

-- stdout --
	multinode-025000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0612 14:55:13.554811    9024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0612 14:55:13.562977    9024 out.go:291] Setting OutFile to fd 1068 ...
	I0612 14:55:13.564173    9024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 14:55:13.564260    9024 out.go:304] Setting ErrFile to fd 1628...
	I0612 14:55:13.564455    9024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 14:55:13.578842    9024 out.go:298] Setting JSON to false
	I0612 14:55:13.578842    9024 mustload.go:65] Loading cluster: multinode-025000
	I0612 14:55:13.578842    9024 notify.go:220] Checking for updates...
	I0612 14:55:13.580066    9024 config.go:182] Loaded profile config "multinode-025000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 14:55:13.580066    9024 status.go:255] checking status of multinode-025000 ...
	I0612 14:55:13.580477    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:55:15.822285    9024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:55:15.822285    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:15.822285    9024 status.go:330] multinode-025000 host status = "Running" (err=<nil>)
	I0612 14:55:15.822285    9024 host.go:66] Checking if "multinode-025000" exists ...
	I0612 14:55:15.823374    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:55:18.160390    9024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:55:18.160390    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:18.161186    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:55:20.785327    9024 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:55:20.785542    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:20.785542    9024 host.go:66] Checking if "multinode-025000" exists ...
	I0612 14:55:20.798045    9024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 14:55:20.798045    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000 ).state
	I0612 14:55:22.933308    9024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:55:22.933308    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:22.933308    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000 ).networkadapters[0]).ipaddresses[0]
	I0612 14:55:25.502825    9024 main.go:141] libmachine: [stdout =====>] : 172.23.198.154
	
	I0612 14:55:25.502825    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:25.503800    9024 sshutil.go:53] new ssh client: &{IP:172.23.198.154 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000\id_rsa Username:docker}
	I0612 14:55:25.596514    9024 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7984539s)
	I0612 14:55:25.609613    9024 ssh_runner.go:195] Run: systemctl --version
	I0612 14:55:25.629221    9024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 14:55:25.654402    9024 kubeconfig.go:125] found "multinode-025000" server: "https://172.23.198.154:8443"
	I0612 14:55:25.654402    9024 api_server.go:166] Checking apiserver status ...
	I0612 14:55:25.667682    9024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0612 14:55:25.704264    9024 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1956/cgroup
	W0612 14:55:25.722759    9024 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1956/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0612 14:55:25.734899    9024 ssh_runner.go:195] Run: ls
	I0612 14:55:25.741852    9024 api_server.go:253] Checking apiserver healthz at https://172.23.198.154:8443/healthz ...
	I0612 14:55:25.750693    9024 api_server.go:279] https://172.23.198.154:8443/healthz returned 200:
	ok
	I0612 14:55:25.750693    9024 status.go:422] multinode-025000 apiserver status = Running (err=<nil>)
	I0612 14:55:25.750693    9024 status.go:257] multinode-025000 status: &{Name:multinode-025000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0612 14:55:25.750693    9024 status.go:255] checking status of multinode-025000-m02 ...
	I0612 14:55:25.751585    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:55:27.904708    9024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:55:27.904796    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:27.904796    9024 status.go:330] multinode-025000-m02 host status = "Running" (err=<nil>)
	I0612 14:55:27.904796    9024 host.go:66] Checking if "multinode-025000-m02" exists ...
	I0612 14:55:27.905754    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:55:30.113437    9024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:55:30.113498    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:30.113498    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:55:32.653788    9024 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:55:32.653788    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:32.653788    9024 host.go:66] Checking if "multinode-025000-m02" exists ...
	I0612 14:55:32.665987    9024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0612 14:55:32.665987    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m02 ).state
	I0612 14:55:34.788216    9024 main.go:141] libmachine: [stdout =====>] : Running
	
	I0612 14:55:34.789208    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:34.789208    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-025000-m02 ).networkadapters[0]).ipaddresses[0]
	I0612 14:55:37.316235    9024 main.go:141] libmachine: [stdout =====>] : 172.23.196.105
	
	I0612 14:55:37.317119    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:37.317345    9024 sshutil.go:53] new ssh client: &{IP:172.23.196.105 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-025000-m02\id_rsa Username:docker}
	I0612 14:55:37.412582    9024 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7465029s)
	I0612 14:55:37.423853    9024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0612 14:55:37.448358    9024 status.go:257] multinode-025000-m02 status: &{Name:multinode-025000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0612 14:55:37.448358    9024 status.go:255] checking status of multinode-025000-m03 ...
	I0612 14:55:37.449190    9024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-025000-m03 ).state
	I0612 14:55:39.554184    9024 main.go:141] libmachine: [stdout =====>] : Off
	
	I0612 14:55:39.554336    9024 main.go:141] libmachine: [stderr =====>] : 
	I0612 14:55:39.554637    9024 status.go:330] multinode-025000-m03 host status = "Stopped" (err=<nil>)
	I0612 14:55:39.554637    9024 status.go:343] host is not running, skipping remaining checks
	I0612 14:55:39.554637    9024 status.go:257] multinode-025000-m03 status: &{Name:multinode-025000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (76.66s)
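
The --alsologtostderr trace above shows how status is computed per node: libmachine shells out to PowerShell for the Hyper-V VM state and IP address, then probes kubelet and the apiserver over ssh. A sketch reproducing just the state query (assumes the Hyper-V PowerShell module is installed and the caller has Hyper-V privileges):

// Sketch reproducing the Hyper-V state query visible in the trace above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmState runs the same one-liner libmachine logs for each node and
// returns its trimmed stdout, e.g. "Running" or "Off".
func vmState(name string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	nodes := []string{"multinode-025000", "multinode-025000-m02", "multinode-025000-m03"}
	for _, n := range nodes {
		state, err := vmState(n)
		fmt.Println(n, state, err)
	}
}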

TestMultiNode/serial/StartAfterStop (184.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 node start m03 -v=7 --alsologtostderr
E0612 14:56:13.934440    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 node start m03 -v=7 --alsologtostderr: (2m28.3991675s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-025000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-025000 status -v=7 --alsologtostderr: (35.7868329s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (184.36s)

TestPreload (507.76s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-290400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0612 15:11:13.929726    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-290400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m19.0706039s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-290400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-290400 image pull gcr.io/k8s-minikube/busybox: (8.1450636s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-290400
E0612 15:14:51.920080    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-290400: (38.0096527s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-290400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0612 15:15:57.165739    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
E0612 15:16:13.940847    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-290400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m34.1463601s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-290400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-290400 image list: (7.1240616s)
helpers_test.go:175: Cleaning up "test-preload-290400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-290400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-290400: (41.2615509s)
--- PASS: TestPreload (507.76s)

TestScheduledStopWindows (323.23s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-218200 --memory=2048 --driver=hyperv
E0612 15:19:51.930422    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 15:21:13.932499    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-218200 --memory=2048 --driver=hyperv: (3m10.665699s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-218200 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-218200 --schedule 5m: (10.7064689s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-218200 -n scheduled-stop-218200
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-218200 -n scheduled-stop-218200: exit status 1 (10.0241973s)

** stderr ** 
	W0612 15:21:43.823735    8360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-218200 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-218200 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.5675526s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-218200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-218200 --schedule 5s: (10.6307944s)
E0612 15:22:55.187280    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-218200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-218200: exit status 7 (2.3821035s)

-- stdout --
	scheduled-stop-218200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0612 15:23:14.072021    3264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-218200 -n scheduled-stop-218200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-218200 -n scheduled-stop-218200: exit status 7 (2.3019206s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0612 15:23:16.452155    9516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-218200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-218200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-218200: (26.9352476s)
--- PASS: TestScheduledStopWindows (323.23s)
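
The scheduled-stop flow above is: arm a stop with minikube stop --schedule, verify the minikube-scheduled-stop systemd unit over ssh, then poll status, where non-zero exits "may be ok" while the countdown runs or after the stop lands. A sketch of the arm-and-poll part (hypothetical wrapper around the minikube CLI; flags are copied from the log):

// Sketch of the arm-and-poll sequence from TestScheduledStopWindows above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "scheduled-stop-218200"
	// Arm a stop five minutes out.
	if out, err := run("stop", "-p", profile, "--schedule", "5m"); err != nil {
		fmt.Println("scheduling failed:", err, out)
		return
	}
	// Poll the countdown; a non-zero exit here "may be ok" (see the log)
	// while the schedule is pending or after the stop has landed.
	out, err := run("status", "--format={{.TimeToStop}}", "-p", profile, "-n", profile)
	fmt.Println("TimeToStop:", out, "err:", err)
}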

TestRunningBinaryUpgrade (1127.22s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2663742333.exe start -p running-upgrade-850900 --memory=2200 --vm-driver=hyperv
E0612 15:24:51.918663    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
E0612 15:26:13.939470    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2663742333.exe start -p running-upgrade-850900 --memory=2200 --vm-driver=hyperv: (10m13.6538812s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-850900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0612 15:34:51.933905    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-850900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m24.0202178s)
helpers_test.go:175: Cleaning up "running-upgrade-850900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-850900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-850900: (1m8.6036512s)
--- PASS: TestRunningBinaryUpgrade (1127.22s)

TestKubernetesUpgrade (1269.08s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-850900 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-850900 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m41.9151519s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-850900
E0612 15:29:51.923757    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-850900: (34.4257044s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-850900 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-850900 status --format={{.Host}}: exit status 7 (2.332679s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0612 15:30:02.023770    2436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-850900 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
E0612 15:31:13.936678    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-850900 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: (7m43.6114525s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-850900 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-850900 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-850900 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (217.0565ms)

-- stdout --
	* [kubernetes-upgrade-850900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0612 15:37:48.166151    5520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-850900
	    minikube start -p kubernetes-upgrade-850900 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8509002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-850900 --kubernetes-version=v1.30.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-850900 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
E0612 15:39:35.198315    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-850900 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: (6m19.0824702s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-850900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-850900
E0612 15:44:51.923322    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-850900: (47.3022591s)
--- PASS: TestKubernetesUpgrade (1269.08s)
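
Editor's note: the downgrade attempt above is a negative test: the command is expected to fail, and with a specific exit code (106, K8S_DOWNGRADE_UNSUPPORTED). Below is a minimal sketch of that kind of assertion, using only the Go standard library and the command line shown in the log; it is an illustration, not minikube's actual test code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test ran; it must be rejected by the newer cluster.
	cmd := exec.Command(`out/minikube-windows-amd64.exe`, "start",
		"-p", "kubernetes-upgrade-850900", "--memory=2200",
		"--kubernetes-version=v1.20.0", "--driver=hyperv")
	err := cmd.Run()

	// Assert failure with the exact exit code rather than any non-zero status.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		fmt.Println("downgrade rejected as expected (exit status 106)")
		return
	}
	fmt.Printf("unexpected result: want exit status 106, got %v\n", err)
}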

TestStoppedBinaryUpgrade/Setup (0.81s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

TestStoppedBinaryUpgrade/Upgrade (955.83s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2355884295.exe start -p stopped-upgrade-437800 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2355884295.exe start -p stopped-upgrade-437800 --memory=2200 --vm-driver=hyperv: (7m53.4826872s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2355884295.exe -p stopped-upgrade-437800 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2355884295.exe -p stopped-upgrade-437800 stop: (37.6730884s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-437800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0612 15:32:37.183782    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-269100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-437800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m24.6725002s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (955.83s)
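
Editor's note: the entry above exercises a three-step flow: provision with an old released binary, stop the cluster, then restart it with the binary under test. A minimal sketch of that flow, assuming the binary paths shown in the log (an illustration, not the harness code):

package main

import (
	"log"
	"os/exec"
)

// run executes one step of the flow and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	// The old released binary (path taken from the log) drives the first two steps.
	old := `C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2355884295.exe`
	run(old, "start", "-p", "stopped-upgrade-437800", "--memory=2200", "--vm-driver=hyperv")
	run(old, "-p", "stopped-upgrade-437800", "stop")
	// The binary under test must then bring the stopped cluster back up.
	run(`out/minikube-windows-amd64.exe`, "start", "-p", "stopped-upgrade-437800",
		"--memory=2200", "--alsologtostderr", "-v=1", "--driver=hyperv")
}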

TestStoppedBinaryUpgrade/MinikubeLogs (10.33s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-437800
E0612 15:39:51.927581    1280 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-605800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-437800: (10.3335589s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.33s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.21s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-318100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-318100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (205.896ms)

-- stdout --
	* [NoKubernetes-318100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0612 15:49:47.832209    8152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.21s)
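
Editor's note: the MK_USAGE failure above (exit status 14) is the expected outcome: --kubernetes-version is meaningless when --no-kubernetes is set, so the combination is rejected up front. A minimal sketch of that kind of flag-conflict validation (not minikube's actual validation code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The combination is contradictory, so fail with a usage error,
	// mirroring the exit status 14 seen in the log.
	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags accepted")
}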


Test skip (30/200)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-269100 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-269100 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 14908: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.04s)
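
Editor's note: the entry above shows the "daemon" pattern: start a long-running command, scan its stdout for a URL, and give up (and kill the process) after a timeout, which is what "output didn't produce a URL" reports. A minimal standard-library sketch of that pattern, assuming the command line shown in the log (not the harness code):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	cmd := exec.Command(`out/minikube-windows-amd64.exe`, "dashboard",
		"--url", "--port", "36195", "-p", "functional-269100")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println("pipe:", err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}

	// Scan the daemon's stdout line by line for the first URL it prints.
	urls := make(chan string, 1)
	go func() {
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if line := strings.TrimSpace(sc.Text()); strings.HasPrefix(line, "http") {
				urls <- line
				return
			}
		}
	}()

	select {
	case u := <-urls:
		fmt.Println("dashboard at", u)
	case <-time.After(5 * time.Minute):
		fmt.Println("output didn't produce a URL")
		_ = cmd.Process.Kill()
	}
}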

TestFunctional/parallel/DryRun (5.07s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-269100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-269100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0661357s)

-- stdout --
	* [functional-269100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0612 13:28:46.328301    9184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0612 13:28:46.330344    9184 out.go:291] Setting OutFile to fd 972 ...
	I0612 13:28:46.330344    9184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:28:46.330344    9184 out.go:304] Setting ErrFile to fd 1216...
	I0612 13:28:46.330344    9184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:28:46.357915    9184 out.go:298] Setting JSON to false
	I0612 13:28:46.363515    9184 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22479,"bootTime":1718201647,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 13:28:46.363580    9184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 13:28:46.370343    9184 out.go:177] * [functional-269100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 13:28:46.372077    9184 notify.go:220] Checking for updates...
	I0612 13:28:46.377522    9184 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:28:46.380829    9184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 13:28:46.383697    9184 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 13:28:46.386735    9184 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 13:28:46.391477    9184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 13:28:46.395655    9184 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:28:46.396711    9184 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.07s)
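
Editor's note: the "Unable to resolve the current Docker CLI context \"default\"" warning that recurs throughout these logs points at a meta.json path whose directory component is, consistent with how the Docker CLI lays out its context store, the SHA-256 digest of the context name: hashing "default" yields the 37a8eec1... value seen in the path. A short sketch that reproduces the path (an illustration of the layout, not Docker CLI code):

package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

func main() {
	// Context metadata lives under a directory named after the SHA-256 of
	// the context name; for "default" this prints the path from the warning.
	digest := sha256.Sum256([]byte("default"))
	fmt.Println(filepath.Join(`C:\Users\jenkins.minikube1\.docker`, "contexts",
		"meta", fmt.Sprintf("%x", digest), "meta.json"))
}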

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-269100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-269100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0405998s)

-- stdout --
	* [functional-269100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0612 13:28:51.453326    3392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0612 13:28:51.456231    3392 out.go:291] Setting OutFile to fd 1512 ...
	I0612 13:28:51.456762    3392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:28:51.456762    3392 out.go:304] Setting ErrFile to fd 1516...
	I0612 13:28:51.456762    3392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0612 13:28:51.486847    3392 out.go:298] Setting JSON to false
	I0612 13:28:51.492271    3392 start.go:129] hostinfo: {"hostname":"minikube1","uptime":22484,"bootTime":1718201647,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4529 Build 19045.4529","kernelVersion":"10.0.19045.4529 Build 19045.4529","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0612 13:28:51.492271    3392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0612 13:28:51.497826    3392 out.go:177] * [functional-269100] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4529 Build 19045.4529
	I0612 13:28:51.500684    3392 notify.go:220] Checking for updates...
	I0612 13:28:51.503478    3392 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0612 13:28:51.507006    3392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0612 13:28:51.511302    3392 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0612 13:28:51.514168    3392 out.go:177]   - MINIKUBE_LOCATION=19044
	I0612 13:28:51.518433    3392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0612 13:28:51.523317    3392 config.go:182] Loaded profile config "functional-269100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0612 13:28:51.524868    3392 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)
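
Editor's note: the banner above ("minikube v1.33.1 sur ...") shows the same dry-run executed under a non-English locale, with "sur" replacing "on" in the French message catalog. A minimal sketch of how such a check could be driven, assuming (hypothetically) that the French locale is forced via the LC_ALL environment variable and translation is asserted by substring:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command(`out/minikube-windows-amd64.exe`, "start",
		"-p", "functional-269100", "--dry-run", "--memory", "250MB", "--driver=hyperv")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // hypothetical way to force French output
	out, _ := cmd.CombinedOutput()              // a non-zero exit is expected here
	if strings.Contains(string(out), " sur ") {
		fmt.Println("banner localized to French")
	}
}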

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
